On 9/9/20 11:05 AM, Jayesh Labade wrote:
> Do I need to perform any cleanup to delete benchmark data from osd ?

Nope.

k
Hi *,
I'm wondering about what actually happens in the ceph cluster if I
copy/sync the content of one bucket into a different bucket. I'll just
describe what I saw and maybe someone could clarify what is happening.
I have an RGW in a small test cluster (15.2.2) and created a bucket (bucket
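(For context: the kind of copy meant here is a server-side S3 copy issued by any S3 client; the bucket names and endpoint below are placeholders, not taken from the original post.)

   aws --endpoint-url http://rgw.example.com:8080 s3 cp \
       s3://bucket1/myobject s3://bucket2/myobject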
Hi,
I'm preparing three Ceph Nautilus nodes as iSCSI gateways following this
documentation: https://docs.ceph.com/docs/nautilus//rbd/iscsi-overview/.
For some reason rbd-target-api on one node doesn't agree on
/etc/ceph/iscsi-gateway.cfg, although "sha256sum
/etc/ceph/iscsi-gateway.cfg" is the same on all nodes.
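As a sanity check, the checksums can be compared across the gateways in one go (hostnames below are placeholders):

   for h in gw1 gw2 gw3; do
       ssh "$h" sha256sum /etc/ceph/iscsi-gateway.cfg
   done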
On Wed, 9 Sep 2020 at 10:06, Eugen Block wrote:
> Hi *,
>
> I'm wondering about what actually happens in the ceph cluster if I
> copy/sync the content of one bucket into a different bucket.
>
> How does this work? It seems as if there's (almost) no client traffic
> (except for the cp command, of
I think RGW will create a new head object that points at the original data only, so
you are right that there is no huge data copy operation.
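One way to verify this from the outside is to compare the RADOS manifests of the source and the copy (bucket and object names are placeholders):

   radosgw-admin object stat --bucket=bucket1 --object=myobject
   radosgw-admin object stat --bucket=bucket2 --object=myobject
   # if no data was copied, both manifests should reference the same tail objects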
Alright, that would explain it. But what happens when I overwrite the
object in bucket1 with different content? Because I'm still able to
get the original content.
Dear all,
I have built two Ceph PoC clusters.
My Ceph version is Octopus 15.2.4 on CentOS 8.
I have configured Ceph multisite and it is working.
Cluster_1
realm 20f3a458-b0d4-4699-b8f7-75411366c635 (global)
zonegroup 94b70bdb-1492-49b7-8cda-210d3552ecd2 (data)
zone bb4106bc-6a01-44e7-be53-ce8c5dbbc
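The multisite state on each cluster can be inspected with the usual commands, for example:

   radosgw-admin sync status   # replication state as seen from the local zone
   radosgw-admin period get    # realm/zonegroup/zone layout of the current period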
Hi Norman,
I'm not pretending to know the exact root cause, but IMO one working
hypothesis might be as follows:
Presuming spinners as backing devices for your OSDs, and hence a 64K
allocation unit (the bluestore min_alloc_size_hdd param):
1) 1.48GB user objects result in 1.48G * 6 = 8.88G EC shards
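The allocation unit in question can be checked on an affected OSD (osd.0 is a placeholder; note the value actually used is fixed at OSD creation time):

   ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
   # with a 64K allocation unit and EC 4+2, each object occupies at least
   # 6 shards * 64K = 384K on disk, no matter how small the object is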
Basically same thing that happens when you overwrite any object. New
data is sent from the client, and a new Head is created pointing at it.
The old head is removed, and the data marked for garbage collection if
it's unused (which it won't be, in this case, since another Head points
at it).
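If you want to watch that happen, the objects queued for garbage collection can be listed with:

   radosgw-admin gc list --include-all   # also shows entries not yet due for processing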
Thank you, that's very helpful, I appreciate it!
Quoting Daniel Gryniewicz:
Basically same thing that happens when you overwrite any object.
New data is sent from the client, and a new Head is created pointing
at it. The old head is removed, and the data marked for garbage
collection
Thank you!!
On Wed, Sep 9, 2020, 1:12 PM Konstantin Shalygin wrote:
> On 9/9/20 11:05 AM, Jayesh Labade wrote:
> > Do I need to perform any cleanup to delete benchmark data from osd ?
>
> Nope.
>
>
>
> k
>
>
Will do Matt
On Tue, Sep 8, 2020 at 5:36 PM Matt Benjamin wrote:
>
> thanks, Shubjero
>
> Would you consider creating a ceph tracker issue for this?
>
> regards,
>
> Matt
>
> On Tue, Sep 8, 2020 at 4:13 PM shubjero wrote:
> >
> > I had been looking into this issue all day and during testing foun
Hi there, I've hit a problem with ceph orch that I cannot find a solution for anywhere
in the docs. I removed an OSD using ceph purge osd.27 (the old style) and forgot
that there is a new way in Octopus. Now I see two OSD
processes via ceph orch ps; is there a way to clean this up properly?
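Not a confirmed fix, but one direction to try (osd.27 taken from the mail above; double-check the daemon name in ceph orch ps first):

   ceph orch ps | grep osd.27           # confirm cephadm still tracks the removed OSD
   ceph orch daemon rm osd.27 --force   # ask cephadm to drop the stale daemon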
Hi Simon,
What about the idea of creating the cluster over two data centers?
Would it be possible to modify the crush map, so one pool gets
replicated over those two data centers and if one fails, the other
one would still be functional?
A stretched cluster is a valid approach, but you hav
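As a rough illustration of the crush side only (a minimal sketch, assuming the crush map already contains two datacenter buckets and using made-up rule/pool names; it does not address the monitor-quorum question of a two-site setup):

   ceph osd crush rule create-replicated repl-dc default datacenter
   ceph osd pool set mypool crush_rule repl-dc
   # with 'datacenter' as the failure domain each replica lands in a different DC,
   # so keeping more than one copy per DC requires a hand-written crush rule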
Right, you can see the previously referenced ticket/bug in the link I had
provided. It's definitely not an unknown situation.
We have another one today:
debug 2020-09-09T06:49:36.595+ 7f570871d700 -1
bluestore(/var/lib/ceph/osd/ceph-123) _verify_csum bad crc32c/0x1000
checksum at blob offset
Hi,
I recently added 3 new servers to our Ceph cluster. These servers use the H740p
mini RAID card, and I had to install the HWE kernel on Ubuntu 16.04 in order to
get the drives recognized.
We have a 23 node cluster and normally when we add OSDs they end up mounting
like this:
/dev/sde1 3.
I am going to attempt to answer my own question here, and someone can correct me
if I am wrong.
Looking at a few of the other OSDs that we have replaced over the last year or
so, it looks like they are mounted using tmpfs as well, and that this is just a
result of switching from filestore to bluestore.
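For anyone checking the same thing, the tmpfs-backed OSD directories can be mapped back to the real devices on the OSD host, e.g.:

   df -h /var/lib/ceph/osd/ceph-*   # bluestore OSD dirs show up as small tmpfs mounts
   ceph-volume lvm list             # shows which LVs/devices back each OSD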
What is your rgw_max_chunk_size? It looks like you'll get these
EDEADLK errors when rgw_max_chunk_size > rgw_put_obj_min_window_size,
because we try to write in units of chunk size but the window is too
small to write a single chunk.
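A quick way to compare the two values (assuming they live in the cluster config database; adjust the 'who' to match your rgw instances):

   ceph config get client.rgw rgw_max_chunk_size
   ceph config get client.rgw rgw_put_obj_min_window_size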
On Wed, Sep 9, 2020 at 8:51 AM shubjero wrote:
>
> Will do Matt
That's right, radosgw doesn't do accounting per storage class. All you
have to go on is the rados-level pool stats for those storage classes.
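In practice that means reading the numbers off the data pools behind each placement target (pool names depend on your zone configuration), e.g.:

   ceph df detail   # per-pool STORED/USED, including the storage-class data pools
   rados df         # the same information at the rados level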
On Mon, Sep 7, 2020 at 7:05 AM Tobias Urdin wrote:
>
> Hello,
>
> Anybody have any feedback or ways they have resolved this issue?
>
> Best regards
> _
Dear Ceph folks,
I encountered an interesting situation as follows: an old FC SAN is connected
to two Ceph OSD nodes, and its LUNs are used as virtual OSDs. When one node fails,
its LUN can be taken over by another node. My question is, how can I start up the
OSD on the new node without reconstructing
Igor,
Thanks for your reply. The object size is 4M and there are almost no overwrites
in the pool, so why did the space loss happen in the pool?
I have another cluster with the same config whose USED is almost equal to
1.5*STORED; the difference between them is that the cluster has different OSD sizes (12T and 8T).
Nor
Has anyone else met the same problem? Using EC instead of replica is supposed to save
space, but now it's worse than replica...
On 9/9/2020 7:30 AM, norman kern wrote:
Hi,
I have changed most of the pools from 3-replica to EC 4+2 in my cluster. When I use
the ceph df command to show
the used capacity of the cluster
Hi,
I haven't done this myself yet but you should be able to simply move
the (virtual) disk to the new host and start the OSD, depending on the
actual setup. If those are stand-alone OSDs (no separate DB/WAL) it
shouldn't be too difficult [1]. If you're using ceph-volume you could
run 'ce
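For the ceph-volume case, the activate step on the take-over node would look roughly like this (a sketch, assuming LVM-based OSDs and that the LUN is already visible on the new host):

   ceph-volume lvm activate --all   # finds the OSD LVs and starts the matching systemd units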