Hello.
I made a "list all buckets" request and got the following response (part
of it):
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner><ID>someowner</ID><DisplayName>SOMEOWNER</DisplayName></Owner>
  ...
Note that "someowner" is used as the ID.
The problem is that the S3-compatible library I use crashes on this; it
expects a 64-character hex string.
Accord
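For context: RGW appears to fill the S3 owner ID with the RGW user's uid
rather than an AWS-style 64-character canonical ID. A quick way to confirm
which RGW user that is, assuming admin access to the cluster and that the
uid really is "someowner":
# radosgw-admin user info --uid=someowner
The library-side workaround is then to relax the 64-hex-character check
rather than to expect AWS canonical IDs from RGW.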
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition size: 976754385 sectors (3.6 TiB)
Attribute
On 16-07-18 10:53, Henrik Korkuc wrote:
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition si
Hello Cephers! I have a problem: I want to configure cache tiering on my
Ceph cluster in writeback mode. In ceph-0.94 it worked fine: IO went first
through the hot pool and was then flushed to the cold pool. But in
ceph-10.2.2 it doesn't behave like this; IO is written to the hot pool and
the cold pool at the same time. I
Hi,
osd_tier_promote_max_bytes_sec
is your friend.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402 beim Amtsgericht Hanau
Geschäftsführ
Guys,
This bug is hitting me constantly, maybe once every few days. Does
anyone know whether there is a solution already?
2016-07-05 11:47 GMT+03:00 Nick Fisk :
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Alex Gorbachev
>> Sent: 04 July
what is "osd_tier_promote_max_bytes_sec" in ceph.conf file and command "ceph
osd pool set ssd-pool target_max_bytes" are not the same ?
On Monday, July 18, 2016 4:40 PM, Oliver Dzombic
wrote:
Hi,
osd_tier_promote_max_bytes_sec
is your friend.
--
Mit freundlichen Gruessen / Best re
Hi
I suggest you read some basic documentation about that.
osd_tier_promote_max_bytes_sec = how many bytes per second may be promoted
into the cache tier
ceph osd pool set ssd-pool target_max_bytes = the maximum size in bytes of
this specific pool (it's like a quota)
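For example, a minimal sketch with purely illustrative values (a cache pool
named ssd-pool is assumed): the promotion throttle goes into ceph.conf
under [osd], while the quota is set per pool:

[osd]
osd_tier_promote_max_bytes_sec = 5242880

# ceph osd pool set ssd-pool target_max_bytes 107374182400

The first limits how fast objects are promoted into the cache tier (about
5 MB/s here); the second tells the tiering agent to start flushing and
evicting once the cache pool holds about 100 GB.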
--
Mit freundlichen Gruessen / Best regards
Oliver Dz
Where can I find that basic documentation? The documentation on the
official website does not seem to be updated.
On Monday, July 18, 2016 5:16 PM, Oliver Dzombic
wrote:
Hi
I suggest you read some basic documentation about that.
osd_tier_promote_max_bytes_sec = how many bytes per second may be promoted into the cache tier
ceph osd pool set ssd-pool t
Hi,
everything is here:
http://docs.ceph.com/docs/jewel/
except
osd_tier_promote_max_bytes_sec
and some other options, but there is enough there that you can make it work.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactiv
thank you very much!
On Monday, July 18, 2016 5:31 PM, Oliver Dzombic
wrote:
Hi,
everything is here:
http://docs.ceph.com/docs/jewel/
except
osd_tier_promote_max_bytes_sec
and some other options, but there is enough there that you can make it work.
--
Mit freundlichen Gruessen / Best regard
Hi All,
I'm quite new to Ceph but did an initial setup on these virtual machines:
1x Ceph admin
3x Ceph mons
3x Ceph OSDs
each OSD node has 3x 100GB drives and 3x 20GB journals
After the initial setup of Ceph and running "# ceph health" I get the
following error
Any help would be appreciated!
Hi,
please show the output of:
ceph osd pool ls detail
also
ceph health detail
please.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 934
Nobody? Is it at least possible with jewel to give the sockets group
write permissions?
Am 10.07.2016 um 23:51 schrieb Stefan Priebe - Profihost AG:
> Hi,
>
> is there a proposed way to connect to the ceph admin socket from
> non-root, e.g. from a monitoring system?
>
> In the past they were crea
On 16-07-18 11:11, Henrik Korkuc wrote:
On 16-07-18 10:53, Henrik Korkuc wrote:
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last se
On Mon, Jul 18, 2016 at 12:20 PM, Henrik Korkuc wrote:
> This file was removed by Sage:
>
> commit 9f76b9ff31525eac01f04450d72559ec99927496
> Author: Sage Weil
> Date: Mon Apr 18 09:16:02 2016 -0400
>
> udev: remove 60-ceph-partuuid-workaround-rules
>
> These were added to get /dev/disk
Hi guys.
Could you help me with a small problem?
We have a new installation of Ceph version 10.2.2 and we have an
interesting problem with auto-mounting OSDs after rebooting a storage node.
We are forced to mount the OSDs manually after a reboot, and then the OSDs
work fine.
But in the previous version, 0.94.5, it was automat
On 16-07-18 13:37, Eduard Ahmatgareev wrote:
Hi guys.
Could you help me with a small problem?
We have a new installation of Ceph version 10.2.2 and we have an
interesting problem with auto-mounting OSDs after rebooting a storage
node. We are forced to mount the OSDs manually after a reboot, and then
the OSDs work fine.
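A hedged workaround sketch until the udev side is sorted out (this assumes
the OSDs were created with ceph-disk, so the data partitions carry the
standard Ceph partition type GUID): instead of mounting by hand after a
reboot, let ceph-disk do the activation:

# ceph-disk activate-all

or re-fire the udev "add" events so the ceph-disk rules run again:

# udevadm trigger --subsystem-match=block --action=add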
> On 18 July 2016 at 11:49, Ivan Koortzen wrote:
>
>
> Hi All,
>
> I'm quite new to Ceph but did an initial setup on these virtual machines:
>
> 1x Ceph admin
> 3x Ceph mons
> 3x Ceph OSDs
>
> each OSD node has 3x 100GB drives and 3x 20GB journals
>
> After initial setup of Ceph and runnin
I assume you installed Ceph using 'ceph-deploy'. I noticed the same
thing on CentOS when deploying a cluster for testing...
As Wido already noted the OSDs are marked as down & out. From each OSD
node you can do a "ceph-disk activate-all" to start the OSDs.
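For completeness, a minimal sketch of that sequence and how to verify it
(nothing beyond the standard CLI is assumed):

# ceph-disk activate-all
# ceph osd tree
# ceph health detail

After activation the OSDs should change from down/out to up/in in the
"ceph osd tree" output.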
On Mon, Jul 18, 2016 at 12:59 PM, Wido d
Any chance you can zip up the raw LTTng-UST files and attach them to
the ticket? It appears that the rbd-replay-prep tool doesn't record or
translate discard events.
The change sounds good to me -- but it would also need to be made in
librados and ceph-osd since I'm sure they would have the same issu
Hi all
Recursive statistics on directories are no longer showing on an ls -l
output but getfattr is accurate:
# ls -l
total 0
drwxr-xr-x 1 root root 3 Jul 18 12:42 dir1
drwxr-xr-x 1 root root 0 Jul 18 12:42 dir2
]# getfattr -d -m ceph.dir.* dir1
# file: dir1
ceph.dir.entries="3"
ceph.dir.files="
On Mon, Jul 18, 2016 at 9:00 PM, David wrote:
> Hi all
>
> Recursive statistics on directories are no longer showing on an ls -l output
> but getfattr is accurate:
>
> # ls -l
> total 0
> drwxr-xr-x 1 root root 3 Jul 18 12:42 dir1
> drwxr-xr-x 1 root root 0 Jul 18 12:42 dir2
>
> ]# getfattr -d -m
Hi,
Is this disabled because it's not a stable feature, or is it just a user preference?
Thanks
On Mon, Jul 18, 2016 at 2:37 PM, Yan, Zheng wrote:
> On Mon, Jul 18, 2016 at 9:00 PM, David wrote:
> > Hi all
> >
> > Recursive statistics on directories are no longer showing on an ls -l
> output
> > but ge
Hi all,
I seem to have forgotten to mention my setup. I have
Ceph Hammer (ceph version 0.94.7
(d56bdf93ced6b80b07397d57e3fa68fe68304432)
CentOS 7.2 w/ Linux 4.4.13
The pool in question is an EC pool on SSD with an SSD cache pool in
front for RBD.
I've done some more digging and I really don't
Hi,
We recently upgraded our Ceph Cluster to Jewel including RGW. Everything seems
to be in order except for RGW which doesn't let us create buckets or add new
files.
# s3cmd --version
s3cmd version 1.6.1
# s3cmd mb s3://test
WARNING: Retrying failed request: /
WARNING: 500 (UnknownError)
WARN
Updated the issue with zipped copies of raw LTTng files. Thanks for
taking a look!
I will also look at fixing the linking issue on librados/ceph-osd side
and send a PR up.
On 07/18, Jason Dillaman wrote:
Any chance you can zip up the raw LTTng-UST files and attach them to
the ticket? It appe
Thanks Zheng, I should have checked that.
Sean, from the commit:
When rbytes mount option is enabled, directory size is recursive size.
Recursive size is not updated instantly. This can cause directory size to
change between successive stat(1)
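For the kernel client this is a mount option; a minimal sketch, with the
monitor address, credentials and mount point as placeholders:

# mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,rbytes

For ceph-fuse the equivalent appears to be the client_dirsize_rbytes
option (client_dirsize_rbytes = true in the [client] section).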
On Mon, Jul 18, 2016 at 2:49 PM, Sean Redmond
wrote
Specifically, this has caused trouble with certain (common?) rsync
configurations.
-Greg
On Monday, July 18, 2016, David wrote:
> Thanks Zheng, I should have checked that.
>
> Sean, from the commit:
>
> When rbytes mount option is enabled, directory size is recursive size.
> Recursive size is no
Patrick Donnelly writes:
>> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1
>> up:standby
>>
>> Now, after starting the upgrade and the next mon restart, the active
>> monitor crashes with "assert(info.state == MDSMap::STATE_STANDBY)"
>> (even without a running mds).
>
> This is the first time you'
I'm not familiar with how it's set up but skimming and searching
through the code I'm not seeing anything, no. We've got a chown but no
chmod. That's a reasonable feature idea though, and presumably you
could add a chmod to your init scripts?
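A minimal sketch of that idea (the group name and the placement of the
chmod are assumptions; adjust for your monitoring user and for however
your daemons are started):

# chgrp monitoring /var/run/ceph/*.asok
# chmod g+rw /var/run/ceph/*.asok

Since the socket is recreated on every daemon restart, the chgrp/chmod has
to be reapplied afterwards, e.g. from the init or systemd unit that starts
the daemon.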
-Greg
On Mon, Jul 18, 2016 at 3:02 AM, Stefan Priebe
On Mon, Jul 18, 2016 at 10:48:16AM +0300, Victor Efimov wrote:
> <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Owner><ID>someowner</ID><DisplayName>SOMEOWNER</DisplayName></Owner>
>   ...
>
> Note that "someowner" is used as the ID.
> The problem is that the S3-compatible library I use crashes on this; it
> expects a 64-character hex string.
>
> According to S3
2016-07-19 1:21 GMT+03:00 Robin H. Johnson :
> On Mon, Jul 18, 2016 at 10:48:16AM +0300, Victor Efimov wrote:
>> <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>>   <Owner><ID>someowner</ID><DisplayName>SOMEOWNER</DisplayName></Owner>
>>   ...
>>
>> Note that "someowner" is used as the ID.
>> The problem is that the S3-compatible library I use crashes on this; it
>> ex
Dear Cephers:
I have two questions that need advice.
1) If there is an OSD disk failure (for example, pulling a disk out), how
long does it take the osd daemon to detect the disk failure? And how long
does it take the ceph cluster to mark this osd daemon down?
Is there any config option to allow the ceph cluster to det
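For 1), the knobs that usually matter are the OSD heartbeat settings and
the monitor's down/out timer; a hedged sketch (the values shown are only
illustrative, check the defaults of your release):

[osd]
osd_heartbeat_interval = 6
osd_heartbeat_grace = 20

[mon]
mon_osd_down_out_interval = 300

The heartbeat grace is roughly how many seconds of missed heartbeats it
takes before peers report an OSD down, and mon_osd_down_out_interval is
how long a down OSD stays "in" before the monitors mark it out and
recovery starts.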
Hi,
I have created a cluster with the below configuration:
- 6 storage nodes, each with 20 disks
- I have a total of 120 OSDs
The cluster was working fine. All of a sudden this morning I noticed that
some OSDs (7 to be exact) were down on one server.
I rebooted the server and 4 OSDs came back. Three OSD
The first question I have is to understand why some disks/OSDs showed a
status of 'down'; there was no activity on the cluster. Last night all the
OSDs were up. What can cause OSDs to go down?
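A quick diagnostic sketch for narrowing that down (nothing cluster-specific
assumed; <id> is a placeholder):

# ceph osd tree | grep -i down
# ceph health detail
# dmesg | grep -i error
# tail -n 200 /var/log/ceph/ceph-osd.<id>.log

"ceph osd tree" shows which OSDs on which host are affected, the kernel
log points at disk or controller problems, and the OSD's own log usually
shows why the daemon stopped or was marked down.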
- epk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP
Komarla
Sent: Mond
robocopy on Windows has the flag /MT:N, where N is the thread count. With
/MT:24 I get 20-30 MB/sec copying from one VM instance to another. This is
all after disabling scrubbing during working hours.
>Tuesday, 12 July 2016, 5:44 +05:00 from Christian Balzer :
>
>
>Hello,
>
>scrub settings will only apply to new scru
Hi All...
We do have some good news.
As promised, I've recompiled ceph 10.2.2 (on an Intel processor without
AVX2) with and without the patch provided by Zheng. It turns out that
Zheng's patch _*is*_ the solution for the segfaults we saw in
ObjectCacher when ceph-fuse runs on AMD 62xx process
I have configured ceph.conf with "osd_tier_promote_max_bytes_sec" in the
[osd] section, but it still has no effect. Running --show-config shows
that it has not been modified.
[root@node01 ~]# cat /etc/ceph/ceph.conf | grep tier
osd_tier_promote_max_objects_sec=20
osd_tier_promote_max_bytes_s
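Two hedged things to check here: "ceph --show-config" run from the shell
reports the configuration as seen by a client, not by an OSD, so a value
placed only in the [osd] section will look unchanged there; and a plain
ceph.conf edit only takes effect after the OSDs are restarted. To see what
a running OSD actually uses, or to change the value at runtime (the value
below is only an example; if the OSD reports the option as unchangeable,
a restart is needed):

# ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec
# ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 5242880'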
Hi Maciej,
we also had problems when upgrading our infernalis RGW cluster to jewel. In
the end I managed to upgrade with the help of a script (from Yehuda). Search
for the thread "[ceph-users] radosgw hammer -> jewel upgrade (default zone &
region config)" on the mailing list. There you can find