Hi,
> Will all objects of a file be stored on only 2 OSDs (in case the
> replication count is 2)?
How big is this file? Small files will not be split.
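For what it's worth: each RADOS object maps independently to a placement group
and its own pair of OSDs, so a file striped over many objects spreads over many
OSDs; only a file small enough to fit into a single object lives on exactly 2
OSDs. You can check where any one object lands (pool and object names below are
made up):

    ceph osd map mypool myobject.0000000000000001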
Micha Krause
Hi,
I'm running 12.2.5 and I have no problems at the moment.
However, my servers report daily that they want to upgrade to 12.2.7. Is this
safe, or should I wait for 12.2.8?
Are there any predictions when the 12.2.8 release will be available?
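If you want to stay on 12.2.5 until that is clear, you can hold the packages so
the daily upgrade nagging stops (a sketch for Debian; adjust the list to the
packages you actually have installed):

    apt-mark hold ceph ceph-base ceph-mon ceph-osd ceph-mds radosgw
    # later, to allow the upgrade again:
    apt-mark unhold ceph ceph-base ceph-mon ceph-osd ceph-mds radosgw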
Micha K
problems deleting some buckets that had multiple reshards done, because of
missing objects (maybe objects were deleted during a dynamic reshard, and this
was not recorded in the indexes).
So for the time being I disabled dynamic resharding again.
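For reference, the knob I turned off is a Luminous option in ceph.conf on the
radosgw hosts (I show it under [global]; a [client.rgw.*] section works as
well, and the gateways need a restart afterwards):

    [global]
    rgw dynamic resharding = false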
Micha K
pg scrub $pg; done
After just a few seconds my pool started flushing and evicting again.
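The loop itself was along these lines (pool name is a stand-in; it asks every
PG of the pool to scrub, assuming the first column of ceph pg ls-by-pool output
is the pgid):

    for pg in $(ceph pg ls-by-pool cachepool | awk '/^[0-9]/ {print $1}'); do
        ceph pg scrub "$pg"
    done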
Micha Krause
re aborted/completed during
resharding?
Micha Krause
On 04.04.2018 16:14, Micha Krause wrote:
Hi,
I have a Bucket with multiple broken multipart uploads, which can't be aborted.
radosgw-admin bucket check shows thousands of _multipart_ objects,
unfortunately the --fix and --check-objects
hg9.meta
emptyfile
in its place, but the error stays the same.
Any ideas how I can get rid of my bucket?
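For reference, the check/fix attempt was roughly this (bucket name is a
stand-in):

    radosgw-admin bucket check --bucket=mybucket --check-objects --fix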
Micha Krause
Hi,
Agreed, but the packages built for stretch do depend on the library.
I had a wrong Debian version in my sources.list :-(
Thanks for looking into it.
Micha Krause
https://packages.debian.org/search?keywords=libsnappy1&searchon=names&suite=all&section=all
https://packages.debian.org/search?suite=all&section=all&arch=any&searchon=names&keywords=libleveldb1
They should probably depend on the 1v5 packages, and they did in version 12
Are there any configuration options to reduce this impact, or to limit
resharding to a maximum of 256 shards?
Micha Krause
(cluster/public), or am I better off using 1 x 40Gb/s (shared)?
Micha Krause
fixed in Luminous for that configuration.
No, I'm using an active/backup configuration.
Micha Krause
"fixed negative inode count";
But my compiler yelled at me for trying this.
Micha Krause
The question is: how can I get rid of this inode?
Micha Krause
indeed, I am able to prevent the crash by running:
root@mds02:~ # ceph --admin-daemon /var/run/ceph/ceph-mds.1.asok force_readonly
during startup of the mds.
Any advice on how to repair the filesystem?
I already tried this without success:
http://
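For reference, the documented CephFS journal-recovery sequence is roughly the
following (take a backup first; whether it helps with this particular damage I
can't say):

    # back up the journal before touching anything
    cephfs-journal-tool journal export backup.bin
    # recover what can be salvaged into the metadata pool, then reset
    cephfs-journal-tool event recover_dentries summary
    cephfs-journal-tool journal reset
    cephfs-table-tool all reset session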
With a kraken libcephfs instead of a jewel version
both errors went away.
I'm sure using a compiled version from the repo you mention would have worked
out of the box.
Micha Krause
ver directories
and files can be accessed, and ls works in subdirectories.
2. I can't create devices in the nfs mount, not sure if ganesha supports this
with other backends.
Micha Krause
n data.
Has anyone gotten this to work, and maybe could give me a hint on what I'm
doing wrong?
Micha Krause
Hi,
If you haven't already installed the previous branch, please try
wip-msgr-jewel-fix2 instead. That's a cleaner and more precise
solution to the real problem. :)
Any predictions when this fix will hit the Debian repositories?
Micha Krause
kernel that comes with XenServer 7
I don't know. Is XenServer really using the kernel rbd client, and not librbd?
Just want to make sure you aren't looking at the wrong thing to update.
Micha Krause
other OSD server.
2. Bad idea, but could work: build your crush rule manually, e.g. set all
primary PGs to host ceph1, the first copy to host ceph2 and the second copy to
host ceph3, as sketched below.
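A sketch of what such a hand-built rule could look like in a decompiled
crushmap (pre-Luminous syntax, hostnames from the example above, untested; the
primary ends up on ceph1 because its OSD is chosen first):

    rule pinned_primary {
            ruleset 2
            type replicated
            min_size 3
            max_size 3
            # primary copy always on host ceph1
            step take ceph1
            step choose firstn 1 type osd
            step emit
            # second copy on ceph2, third on ceph3
            step take ceph2
            step choose firstn 1 type osd
            step emit
            step take ceph3
            step choose firstn 1 type osd
            step emit
    }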
Micha Krause
On 08.07.2016 at 05:47, Nathanial Byrnes wrote:
Hello,
I've got a Jewel Cluster (3 nodes, 15 OSD'
also suggest creating a new crush rule, instead of modifying your
existing one.
This enables you to change the rule on a per pool basis:
ceph osd pool set <pool> crush_ruleset <rulenum>
Then start with your smallest pool, and see how it goes.
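For example (rule number and pool name are stand-ins; ceph osd crush rule dump
shows the available rules and their numbers):

    ceph osd pool set smallpool crush_ruleset 2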
Micha Krause
1. Stop rados-gateways
2. Upgrade rados-gateways to jewel
3. Run less scary script
4. Start rados-gateways
This whole thing is a serious problem; there should at least be a clear notice
in the Jewel release notes about it. I was lucky to catch this in my
test cluster,
I'm sure a lot of people
*bump*
On 01.07.2016 at 13:00, Micha Krause wrote:
Hi,
> In Infernalis there was this command:
radosgw-admin regions list
But this is missing in Jewel.
Ok, I just found out that this was renamed to zonegroup list:
root@rgw01:~ # radosgw-admin --id radosgw.rgw zonegroup list
",
"zonegroups": [
"default"
]
}
This looks to me like there is indeed only one zonegroup or region configured.
Micha Krause
regions I have.
In Infernalis there was this command:
radosgw-admin regions list
But this is missing in Jewel.
Micha Krause
Hi,
If I try to create a bucket (using s3cmd) I'm getting this error:
WARNING: 500 (UnknownError):
The rados-gateway server says:
ERROR: endpoints not configured for upstream zone
The servers were updated to jewel, but I'm not sure the error wasn't
there before.
Micha Krause
ase:
micha@micha:~$ host *.rgw.noris.net
*.rgw.noris.net has address 62.128.8.6
*.rgw.noris.net has address 62.128.8.7
*.rgw.noris.net has IPv6 address 2001:780:6::6
*.rgw.noris.net has IPv6 address 2001:780:6::7
Micha Krause
2 | wc -l
2228777
So this data is then stored in the omap directory on my OSD as .sst files?
Is there a way to correlate a rados object with a specific .sst (leveldb?) file?
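The truncated command above was presumably along these lines, counting the omap
keys of a bucket index object (pool and object names are stand-ins):

    rados -p .rgw.buckets.index listomapkeys .dir.default.12345.1 | wc -l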
Micha Krause
0B object make any difference?
Micha Krause
45def.2ae8944a.0010459f  ???  write
3628813  osd18  37.69e6111d  rb.0.1345def.2ae8944a.001045a1  ???  write
this output does not change.
dmesg shows hung task stuff again, but no rbd related lines.
Micha Krause
try 3.12 tomorrow and report back.
Ok, I have tested 3.12.9 and it also hangs.
I have no other pre-build kernels to test :-(.
If I have to compile kernels anyway, I will test 3.16.3 as well :-/.
Micha Krause
3.12 tomorrow and report back.
Micha Krause
workload is quite different to the nfs gateway
server running on Debian.
On the gateway I have tested 3.13.10 and 3.14.12 and about 30min after I/O
starts, rbd hangs.
Micha Krause
Hi,
> things work fine on kernel 3.13.0-35
I can reproduce this on 3.13.10, and I had it once on 3.13.0-35 as well.
Micha Krause
osdc)
at least for 10 minutes nothing happened here.
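(That osdc view is presumably the in-flight request list in debugfs, watched
with something like:)

    watch -n1 cat /sys/kernel/debug/ceph/*/osdc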
Micha Krause
+0x70/0x70
[] ret_from_fork+0x7c/0xb0
[] ? kthread_freezable_should_stop+0x70/0x70
Micha Krause
On 23.09.2014 at 15:37, Micha Krause wrote:
bump
I have observed this crash on ubuntu with kernel 3.13 and centos with 3.16 as
well now.
rbd hangs, and iostat shows something similar to the Output below.
bump
I have observed this crash on ubuntu with kernel 3.13 and centos with 3.16 as
well now.
rbd hangs, and iostat shows something similar to the Output below.
Micha Krause
On 19.09.2014 at 09:22, Micha Krause wrote:
Hi,
> I have built an NFS server based on Sebastien's blog post h
removed.
Why are they still known to the rbd client? The OSDs were removed before the
client was booted.
Micha Krause
crashes multiple times per day; I can't even log in to the server then.
After a reset there is no kernel log about the crash, so I guess something is
blocking all I/O.
Any ideas on how to debug this?
Micha Krause
Hi,
> Have you confirmed that if you unmount cephfs on /srv/micha the NFS export
works?
Yes, I'm probably hitting this bug: http://tracker.ceph.com/issues/7750
Micha Krause
ecret=,nodcache,nofsc,acl)
This is probably my problem; it works if I export the cephfs root :-( :
http://tracker.ceph.com/issues/7750
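The workaround in /etc/exports is then to export the cephfs mountpoint itself
instead of a subdirectory, something like this (path and options are
stand-ins):

    /mnt/cephfs  *(rw,fsid=0,no_subtree_check)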
Micha Krause
Hi,
any ideas?
Micha Krause
On 11.08.2014 at 16:34, Micha Krause wrote:
Hi,
I'm trying to build a cephfs-to-NFS gateway, but somehow I can't mount the
share if it is backed by cephfs:
mount ngw01.ceph:/srv/micha /mnt/tmp/
mount.nfs: Connection timed out
cephfs mount on the ga
Hi,
> The NFS crossmnt options can help you.
Thanks for the suggestion, I tried it, but it makes no difference.
Micha Krause
nfs-kernel-server.
Micha Krause
weight. (For instance, if one of your OSDs is at 90% and the others are
at 50%, you could reduce this weight to try and compensate for it.)
Thanks. So if I have some older OSDs, and I want them to receive less data/IOPS
than the other nodes, I would use "ceph osd crush reweight"?
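A sketch of both commands (OSD id and weights are made up):

    # permanent change of the CRUSH weight (roughly the capacity in TB):
    ceph osd crush reweight osd.12 0.5
    # temporary override between 0.0 and 1.0, applied on top of the CRUSH weight:
    ceph osd reweight 12 0.8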
Hi,
could someone explain to me what the difference is between
ceph osd reweight
and
ceph osd crush reweight
Micha Krause
, one with jessie, one with
wheezy + backports kernel.
Is there some config option to enable snapshots, or is this a bug?
Micha Krause
Hi,
>> So how does AWS S3 handle Public access to objects?
You have to explicitly set a public ACL on each object.
Ok, but this also does not work with radosgw + s3cmd:
s3cmd setacl -P s3://test/fstab
ERROR: S3 error: 403 (AccessDenied):
Micha
Hi,
> Note this breaks AWS S3 compatibility and is why it is configurable.
So how does AWS S3 handle Public access to objects?
Micha Krause
Hi,
No solution so far, but I also asked in IRC and linuxkidd told me they
were looking for a workaround.
Micha Krause
s:put_acls:http status=403
2013-11-08 13:56:55.094209 7fe3314c6700 1 ====== req done req=0xf68e20
http_status=403 ======
2013-11-08 13:57:03.324082 7fe35d922700 2
RGWDataChangesLog::ChangesRenewThread: start
2013-11-08 13:57:25.324242 7fe35d922700 2
RGWDataChangesLog::ChangesRenewThread: start