Hi Team,
I was trying to forcefully mark the unfound objects as lost using the command
below, as mentioned in the documentation, but it is not working in the latest
release. Are there any prerequisites required for EC pools?
cn1.chn6m1c1ru1c1.cdn ~# *ceph pg 4.1206 mark_unfound_lost revert|delete*
-bash: delete: comman
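For reference, the "revert|delete" in the documentation means choosing one of the two
keywords; left unquoted, bash treats the "|" as a pipe and tries to run a command named
"delete", which is what produces the error above. A minimal sketch of the intended
invocation, reusing pg 4.1206 from the command above:

# optionally list the unfound objects first
ceph pg 4.1206 list_unfound
# then mark them lost; for an EC pool only "delete" applies (see the replies below)
ceph pg 4.1206 mark_unfound_lost delete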
Hi,
I have a similar issue and would also appreciate some advice on how to get rid
of the already deleted files.
Ceph is our OpenStack backend and there was a nova clone without
parent information. Apparently, the base image had been deleted
without a warning or anything although there were existi
EC pools only support deleting unfound objects as there aren't multiple
copies around that could be reverted to.
ceph pg mark_unfound_lost delete
Paul
2018-05-08 9:26 GMT+02:00 nokia ceph :
> Hi Team,
>
> I was trying to forcefully lost the unfound objects using the below
> commands mentione
It's a very bad idea to accept data if you can't guarantee that it will be
stored in a way that tolerates a disk outage without data loss. Just don't.
Increase the number of coding chunks to 3 if you want to withstand two
simultaneous disk
failures without impacting availability.
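A hedged sketch of how that could look when creating a new pool; the profile name,
pool name, and the k=4 data-chunk count below are placeholders rather than values from
this thread. Note that an existing pool's EC profile cannot simply be changed in place,
so the usual route is a new pool with the new profile plus data migration:

# profile with 3 coding chunks, so two simultaneous disk failures can be tolerated
ceph osd erasure-code-profile set ec-k4-m3 k=4 m=3
ceph osd pool create ecpool 64 64 erasure ec-k4-m3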
Paul
2018-05-08
Thank you, it works.
On Tue, May 8, 2018 at 2:05 PM, Paul Emmerich
wrote:
> EC pools only support deleting unfound objects as there aren't multiple
> copies around that could be reverted to.
>
> ceph pg mark_unfound_lost delete
>
>
> Paul
>
> 2018-05-08 9:26 GMT+02:00 nokia ceph :
>
>> Hi Team,
On Mon, May 7, 2018 at 8:50 PM, Ryan Leimenstoll
wrote:
> Hi All,
>
> We recently experienced a failure with our 12.2.4 cluster running a CephFS
> instance that resulted in some data loss due to a seemingly problematic OSD
> blocking IO on its PGs. We restarted the (single active) mds daemon durin
Hello Jean-Charles!
I have finally caught the problem; it was at 13-02.
[cephuser@storage-ru1-osd3 ~]$ ceph health detail
HEALTH_WARN 18 slow requests are blocked > 32 sec
REQUEST_SLOW 18 slow requests are blocked > 32 sec
3 ops are blocked > 65.536 sec
15 ops are blocked > 32.768 sec
2018-05-08 1:46 GMT+02:00 Maciej Puzio :
> Paul, many thanks for your reply.
> Thinking about it, I can't decide if I'd prefer to operate the storage
> server without redundancy, or have it automatically force a downtime,
> subjecting me to a rage of my users and my boss.
> But I think that the ty
Perhaps the image had associated snapshots? Deleting the object
doesn't delete the associated snapshots so those objects will remain
until the snapshot is removed. However, if you have removed the RBD
header, the snapshot id is now gone.
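When the image header is still present, the snapshots can be inspected and removed
before deleting the image; a rough sketch with placeholder pool/image names (not
applicable once the header is gone, as noted above):

rbd snap ls <pool>/<image>                  # list snapshots still holding data
rbd snap unprotect <pool>/<image>@<snap>    # needed if a clone protected the snapshot
rbd snap purge <pool>/<image>               # remove all snapshots of the image
rbd rm <pool>/<image>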
On Tue, May 8, 2018 at 12:29 AM, Eugen Block wrote:
> Hi,
>
Hi Grigory,
are these lines the only lines in your log file for OSD 15?
Just for sanity, what log levels, if any, have you set in your config file that
differ from the defaults? If you set all log levels to 0, as some people do,
you may want to simply go back to the defaults by commenting out th
(newbie warning - my first go-round with ceph, doing a lot of reading)
I have a small Ceph cluster, four storage nodes total, three dedicated to
data (OSDs) and one for metadata. One client machine.
I made a network change. When I installed and configured the cluster, it was
done
using the syst
On Tue, May 8, 2018 at 3:50 PM, James Mauro wrote:
> (newbie warning - my first go-round with ceph, doing a lot of reading)
>
> I have a small Ceph cluster, four storage nodes total, three dedicated to
> data (OSD’s) and one for metadata. One client machine.
>
> I made a network change. When I ins
Something simple like `s3cmd put file s3://bucket/file --acl-public`
On Sat, May 5, 2018 at 6:36 AM Marc Roos wrote:
>
>
> What would be the best way to implement a situation where:
>
> I would like to archive some files in lets say an archive bucket and use
> a read/write account for putting th
Didn't mean to hit send on that quite yet, but that's the gist of
everything you need to do. There is nothing special about this for RGW vs
AWS except that AWS can set this permission on a full bucket while in RGW
you need to do this on each object when you upload them.
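For objects that were already uploaded without the ACL, s3cmd can also apply it
afterwards; a small sketch with a placeholder bucket and key:

s3cmd put file s3://bucket/file --acl-public       # set the ACL at upload time
s3cmd setacl s3://bucket/file --acl-public         # or fix up an existing object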
On Tue, May 8, 2018 at 12:
The mons work best when they know absolutely everything. If they know that
osd.3 was down 40 seconds before osd.2, that means that if a write was
still happening while osd.2 was still up, the mons have a record of it in
the maps, and when osd.3 comes up, it can get what it needs from the other
osds
You talked about "using default settings wherever possible"... Well, Ceph's
default settings, everywhere they exist, are to not allow writes unless
you have at least one more copy that you can lose without data loss.
If your bosses require you to be able to lose 2 servers and still serve
cus
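The relevant setting here is the pool's min_size: for a replicated pool with size=3 it
defaults to 2, and for an EC pool it defaults to k+1, so losing more than one failure
domain blocks writes. A quick sketch for inspecting and, cautiously, changing it, with a
placeholder pool name:

ceph osd pool get <pool> size
ceph osd pool get <pool> min_size
# lowering it allows writes with less redundancy, at real risk of data loss
ceph osd pool set <pool> min_size <n>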
Sorry I've been on vacation, but I'm back now. The commands I use to create
subusers for an RGW user are...
radosgw-admin user create --gen-access-key --gen-secret --uid=user_a
--display_name="User A"
radosgw-admin subuser create --gen-access-key --gen-secret
--access={read,write,readwrite,full} --k
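A hedged sketch of a fuller form of the subuser command, with a placeholder uid,
subuser name, and access level; the exact flags cut off in the line above are not
reproduced here:

radosgw-admin subuser create --uid=user_a --subuser=user_a:archiver \
    --access=readwrite --key-type=s3 --gen-access-key --gen-secret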
On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio wrote:
> I am an admin in a research lab looking for a cluster storage
> solution, and a newbie to ceph. I have setup a mini toy cluster on
> some VMs, to familiarize myself with ceph and to test failure
> scenarios. I am using ceph 12.2.4 on Ubuntu 18.
On Tue, May 8, 2018 at 7:35 PM, Vasu Kulkarni wrote:
> On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio wrote:
>> I am an admin in a research lab looking for a cluster storage
>> solution, and a newbie to ceph. I have setup a mini toy cluster on
>> some VMs, to familiarize myself with ceph and to tes
On Tue, May 8, 2018 at 12:07 PM, Dan van der Ster wrote:
> On Tue, May 8, 2018 at 7:35 PM, Vasu Kulkarni wrote:
>> On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio wrote:
>>> I am an admin in a research lab looking for a cluster storage
>>> solution, and a newbie to ceph. I have setup a mini toy clu
Hi Gregg, John,
Thanks for the warning. It was definitely conveyed that they are dangerous. I
thought the online part was implied to be a bad idea, but just wanted to verify.
John,
We were mostly operating off of what the mds logs reported. After bringing the
mds back online and active, we mo
My cluster got stuck somehow, and at one point in trying to recycle things to
unstick it, I ended up shutting down everything, then bringing up just the
monitors. At that point, the cluster reported the status below.
With nothing but the monitors running, I don't see how the status can say
there
Thank you everyone for your replies. However, I feel that at least
part of the discussion deviated from the topic of my original post. As
I wrote before, I am dealing with a toy cluster, whose purpose is not
to provide resilient storage, but to evaluate ceph and its behavior
in the event of a fai
Hello Jason,
On 8 May 2018 at 15:30:34 MESZ, Jason Dillaman wrote:
>Perhaps the image had associated snapshots? Deleting the object
>doesn't delete the associated snapshots so those objects will remain
>until the snapshot is removed. However, if you have removed the RBD
>header, the snapshot id
We recently began our upgrade testing for going from Jewel (10.2.10) to
Luminous (12.2.5) on our clusters. The first part of the upgrade went
pretty smoothly (upgrading the mon nodes, adding the mgr nodes, upgrading
the OSD nodes); however, when we got to the RGWs we started seeing internal
server
Hi Everyone,
We run some hosts with Proxmox 4.4 connected to our ceph cluster for
RBD storage. Occasionally a VM will suddenly stop with no real
explanation. The last time this happened to one particular VM, I turned
on some qemu logging via the Proxmox Monitor tab for the VM and got this
dump this ti