Hi Casey, thanks for this info. It’s been doing something for 36 hours, but not
updating the status at all. So it either takes a really long time for
“preparing for full sync” or I’m doing something wrong. This is helpful
information, but there’s a myriad of states that the system could be in.
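In case it helps anyone else digging into the same thing, the commands I have been using to get more detail on multisite sync progress look roughly like this (a sketch; the source zone name is a placeholder):
radosgw-admin sync status                              # overall metadata/data sync state
radosgw-admin metadata sync status                     # metadata sync detail
radosgw-admin data sync status --source-zone=<zone>    # per-zone data sync detail
radosgw-admin sync error list                          # any recorded sync errors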
Hello,
A few months ago we experienced an issue with Ceph v13.2.4:
1. One of the nodes had all of its OSDs set to out, to clean them up for
replacement.
2. Noticed that a lot of snaptrim was running.
3. Set the nosnaptrim flag on the cluster (to improve performance); see the
command sketch below.
4. Once mon_osd_snap_trim_queue_warn_o
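For reference, the cluster-level commands involved in steps 1 and 3 are roughly these (the OSD IDs are placeholders, shown only to make the sequence concrete):
ceph osd out 10 11 12       # step 1: mark the node's OSDs out ahead of replacement
ceph osd set nosnaptrim     # step 3: pause snapshot trimming cluster-wide
ceph osd unset nosnaptrim   # later: re-enable trimming once recovery settles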
Let me try to reproduce this on CentOS 7.5 with master, and I'll let
you know how I go.
On Thu, Apr 18, 2019 at 3:59 PM Can Zhang wrote:
>
> Using the commands you provided, I actually find some differences:
>
> On my CentOS VM:
> ```
> # sudo find ./lib* -iname '*.so*' | xargs nm -AD 2>&1 | grep
Hello,
I have one OSD which can't start and is giving the above error. Everything was
running OK until last night, when the interface card of the server hosting
this OSD went faulty.
We replaced the faulty interface and the other OSDs started fine, except one OSD.
We are running Ceph 14.2.0 and all OSDs are ru
Hi,
the Ceph iSCSI gateway has a problem when receiving discovery auth
requests when discovery auth is not enabled. Target discovery fails in
this case (see below). This is especially annoying with oVirt (KVM
management platform) where you can't separate the two authentication
phases. This le
Hi,
I am trying to set up Ceph through Docker inside a VM. My host machine
is a Mac. My VM is an Ubuntu 18.04. Docker version is 18.09.5, build
e8ff056.
I am following the documentation present on ceph/daemon Docker Hub
page. The idea is, if I spawn docker containers as mentioned on the
page, I should
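For reference, the kind of invocation I am using for the monitor container, roughly following that page (the IP and network below are placeholders for my VM's values, so treat this as a sketch rather than a verified command):
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.122.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.122.0/24 \
  ceph/daemon mon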
I have been looking a bit at the S3 clients available to be used, and I
think they are quite shitty, especially this Cyberduck, which processes
files with default read rights for everyone. I am in the process of
advising clients to use, for instance, this Mountain Duck. But I am not too
happy abou
Call for Submission
*Deadline*: 10 June 2019 AoE
The IO500 is now accepting and encouraging submissions for the upcoming 4th
IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once
again, we are also accepting submissions to the 10 node I/O challenge to
encourage submission of small
Hi Marc
FileZilla has decent S3 support: https://filezilla-project.org/
YMMV, of course!
On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote:
>
>
> I have been looking a bit at the S3 clients available to be used, and I
> think they are quite shitty, especially this Cyberduck, which processes
> files w
Yeah, that was a cluster created back during Firefly...
I wish there were a good article on the naming and use of these, or perhaps a way
I could make sure they are not used before deleting them. I know RGW will
recreate anything it uses, but I don’t want to lose data because I wanted a
clean system.
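If anyone knows a better way, here is a rough sketch of how I have been trying to check whether a pool is actually in use before touching it (the pool name is a placeholder):
ceph df                            # per-pool object and byte counts
ceph osd pool ls detail            # pool flags and application tags
rados -p <pool-name> ls | head     # sample a few object names, if any exist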
That’s good to know as well, I was seeing the same thing. I hope this is just
an informational message though.
-Brent
-----Original Message-----
From: ceph-users On Behalf Of Mark Schouten
Sent: Tuesday, April 16, 2019 3:15 AM
To: Igor Podlesny ; Sinan Polat
Cc: Ceph Users
Subject: Re: [ceph
Hello,
I have a server with 18 disks, and 17 OSD daemons configured. One of the
OSD daemons failed to deploy with ceph-deploy. The reason for the failure is
unimportant at this point; I believe it was a race condition, as I was
running ceph-deploy inside a while loop for all disks in this server.
Now I
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote:
>
> Hello,
> I have a server with 18 disks, and 17 OSD daemons configured. One of the OSD
> daemons failed to deploy with ceph-deploy. The reason for the failure is
> unimportant at this point; I believe it was a race condition, as I was running
Hey,
I'm running a new Ceph 13 cluster, using just one CephFS on a 6+3
erasure-coded pool; each OSD is a 10 TB HDD, 20 total, each on its own host.
Storing mostly large files, ~20 GB each. I'm running mostly stock, except that
I've tuned for the low-memory (2 GB) hosts based on an old thread's
recommendatio
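For what it's worth, on recent 13.2.x releases the usual knob for this is osd_memory_target; a minimal sketch of such a ceph.conf stanza (the value is illustrative, not necessarily what I actually set):
[osd]
# keep the BlueStore cache autotuner well under the 2 GB of host RAM
osd_memory_target = 1073741824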
Hi!
I am not 100% sure, but I think --net=host does not propagate /dev/
inside the container.
From the error message:
2019-04-18 07:30:06 /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !
I would say you should add something like
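the following to the docker run command (a sketch only, untested; /dev/vdd is taken from the error message above, and whether ceph/daemon needs the whole /dev or just the one device is an assumption on my part):
  -v /dev/:/dev/ \
  --privileged=true \
or, to expose only that one disk:
  --device=/dev/vdd \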
The Ansible deploy is quite a pain to get set up properly, but it does
work to get the whole stack working under Docker. It uses the following
script on Ubuntu to start the OSD containers:
/usr/bin/docker run \
--rm \
--net=host \
--privileged=true \
--pid=host \
--memory=64386m \
https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/
not saying it definitely is, or isn't malware-ridden, but it sure was shady
at that time.
I would suggest not pointing people to it.
On Thu, 18 Apr 2019 at 16:41, Brian wrote:
> Hi Marc
>
> Filezilla has decent S3 support ht
Thank you Alfredo
I did not have any reason to keep the volumes around.
I tried using ceph-volume to zap these stores, but none of the commands
worked, including yours: 'ceph-volume lvm zap
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'
I ended up manually removing LUKS volumes and then deleting
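For anyone hitting the same situation, the generic teardown sequence for a dm-crypt-backed LV like this looks roughly as follows (a sketch, not a verified recipe; the mapper name, LV name, and raw device are placeholders, and the VG name is the one from above):
cryptsetup luksClose /dev/mapper/<dm-name>   # close the dm-crypt mapping first
lvremove osvg-sdd-db/<lv-name>               # then remove the logical volume
vgremove osvg-sdd-db                         # and the volume group, if nothing else uses it
wipefs -a /dev/<raw-device>                  # finally clear leftover signatures on the disk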
On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev wrote:
>
> Thank you Alfredo
> I did not have any reason to keep the volumes around.
> I tried using ceph-volume to zap these stores, but none of the commands
> worked, including yours: 'ceph-volume lvm zap
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-
I am trying to determine some sizing limitations for a potential iSCSI
deployment, and I'm wondering what the current lay of the land is:
Are the following still accurate as of the ceph-iscsi-3.0 implementation,
assuming CentOS 7.6+ and the latest python-rtslib etc. from shaman:
* Limit of 4
# ceph-volume lvm zap --destroy
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --de
On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl
wrote:
>
> Hi !
>
> I am not 100% sure, but I think --net=host does not propagate /dev/
> inside the container.
>
> From the error message:
>
> 2019-04-18 07:30:06 /opt/ceph-container/bin/entrypoint.sh: ERROR- The
> device pointed by OSD_DEVIC