Hi All,
Any update on this from anyone?
On Tue, Jul 28, 2020 at 4:00 PM sathvik vutukuri <7vik.sath...@gmail.com>
wrote:
> Hi All,
>
> radosgw-admin is configured with ceph-deploy, and I created a few buckets from the
> Ceph dashboard, but when accessing through Java AWS S3 code to create a new
> bucke
On Wed, Jul 29, 2020 at 03:17, David Orman wrote:
> That's what the formula on the ceph link arrives at, a 2/3 or 66.66%
> overhead. But if a 4 byte object is split into 4x1 byte chunks data (4
> bytes total) + 2x 1 byte chunks parity (2 bytes total), you arrive at 6
> bytes, which is 50% more tha
Maybe it's a DNS issue, I guess.
On Wed, Jul 29, 2020 at 3:21 PM, sathvik vutukuri <7vik.sath...@gmail.com> wrote:
> Hi All,
>
> Any update on this from anyone?
>
> On Tue, Jul 28, 2020 at 4:00 PM sathvik vutukuri <7vik.sath...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > radosgw-admin is configured with ceph-deploy
This works for me (the code switches between AWS and RGW according to
whether s3Endpoint is set). You need the pathStyleAccess unless you have
wildcard DNS names etc.
String s3Endpoint = "http://my.host:80";;
AmazonS3ClientBuilder s3b = AmazonS3ClientBuilder.standard ()
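For completeness, here is a fuller, self-contained sketch along the same lines (the endpoint, "default" region string, credentials, bucket name and class name are placeholders, not values from this thread):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class RgwS3Example {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials -- substitute your RGW values.
        String s3Endpoint = "http://my.host:80";
        BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");

        AmazonS3ClientBuilder s3b = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(creds));

        if (s3Endpoint != null && !s3Endpoint.isEmpty()) {
            // RGW: explicit endpoint plus path-style access, so no wildcard DNS is needed.
            s3b.withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(s3Endpoint, "default"))
               .withPathStyleAccessEnabled(true);
        }
        // Otherwise the builder falls back to plain AWS with the default region/credential chain.

        AmazonS3 s3 = s3b.build();
        s3.createBucket("my-new-bucket");
        s3.listBuckets().forEach(b -> System.out.println(b.getName()));
    }
}

With path-style access the client addresses buckets as http://my.host:80/bucket-name rather than http://bucket-name.my.host, which is why the wildcard DNS requirement goes away.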
Thanks, I'll check it out.
On Wed, 29 Jul 2020, 13:35 Chris Palmer wrote:
> This works for me (the code switches between AWS and RGW according to
> whether s3Endpoint is set). You need the pathStyleAccess unless you have
> wildcard DNS names etc.
>
> String s3Endpoint = "http://my.h
Hi,
I'm trying to have clients read the 'rbd_default_data_pool' config
option from the config store when creating an RBD image.
This doesn't seem to work and I'm wondering if somebody knows why.
I tried:
$ ceph config set client rbd_default_data_pool rbd-data
$ ceph config set global rbd_defa
Aren't you just looking at the same thing from two different perspectives?
In one case you say: I have 100% of useful data, and I need to add 50% of
parity for a total of 150% raw data.
In the other, you say: Out of 100% of raw data, 2/3 is useful data, 1/3 is
parity, which gives you your 33.3%
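Put as formulas, for a k+m erasure-coding profile (k = 4 data chunks, m = 2 coding chunks in the example above):

  overhead relative to the stored data = m / k = 2/4 = 50%
  parity share of the raw capacity = m / (k + m) = 2/6 ≈ 33.3%
  usable share of the raw capacity = k / (k + m) = 4/6 ≈ 66.7%

Both percentages describe the same 4+2 layout; they just use different denominators (stored data vs. raw capacity).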
Hi Frank,
you might want to proceed with the perf counter dump analysis in the
following way:
For 2-3 arbitrary OSDs:
- save current perf counter dump
- reset perf counters
- leave OSD under the regular load for a while.
- dump perf counters again
- share both saved and new dumps and/or chec
Hi All,
I'm kind of crossposting this from here:
https://forum.proxmox.com/threads/i-o-wait-after-upgrade-5-x-to-6-2-and-ceph-luminous-to-nautilus.73581/
But since I'm more and more sure that it's a Ceph problem, I'll try my
luck here.
Since updating from Luminous to Nautilus I have a big prob
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
>
> Hi,
>
> I'm trying to have clients read the 'rbd_default_data_pool' config
> option from the config store when creating a RBD image.
>
> This doesn't seem to work and I'm wondering if somebody knows why.
It looks like all string-based
On 29/07/2020 14:54, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
Hi,
I'm trying to have clients read the 'rbd_default_data_pool' config
option from the config store when creating a RBD image.
This doesn't seem to work and I'm wondering if somebody knows
On 29/07/2020 14:52, Raffael Bachmann wrote:
Hi All,
I'm kind of crossposting this from here:
https://forum.proxmox.com/threads/i-o-wait-after-upgrade-5-x-to-6-2-and-ceph-luminous-to-nautilus.73581/
But since I'm more and more sure that it's a Ceph problem, I'll try my
luck here.
Since up
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
>
>
>
> On 29/07/2020 14:54, Jason Dillaman wrote:
> > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> >>
> >> Hi,
> >>
> >> I'm trying to have clients read the 'rbd_default_data_pool' config
> >> option from the config store w
Hi Wido
Thanks for the quick answer. They are all Intel p3520
https://ark.intel.com/content/www/us/en/ark/products/88727/intel-ssd-dc-p3520-series-2-0tb-2-5in-pcie-3-0-x4-3d1-mlc.html
And this is ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
n
Hi Raffael,
Adam made a PR this year that shards rocksdb data across different
column families to help reduce compaction overhead. The goal is to
reduce write-amplification during compaction by storing multiple small
LSM hierarchies rather than 1 big one. We've seen evidence that this
lowe
Hi Mark
Unfortunately it is the production cluster and I don't have another one :-(
This is the output of the log parser. I have nothing to compare them to.
Stupid me has no more logs from before the upgrade.
python ceph_rocksdb_log_parser.py ceph-osd.1.log
Compaction Statistics ceph-osd.1.
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote:
>
> On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
> >
> >
> >
> > On 29/07/2020 14:54, Jason Dillaman wrote:
> > > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> > >>
> > >> Hi,
> > >>
> > >> I'm trying to have client
Frank,
so you have a pretty high amount of small writes indeed. More than half
of the written volume (in bytes) is done via small writes,
and there are 6x more small requests.
This looks pretty odd for a sequential write pattern and is likely to be
the root cause of that space overhead.
I can
Hi,
Thank you, everyone, for the help. I absolutely was mixing up the two,
which is why I was asking for guidance. The example made it clear. The
question I was trying to answer was: what would the capacity of the cluster
be, for actual data, based on the raw disk space + server/drive count +
eras
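In formula form that capacity estimate is roughly: usable capacity ≈ raw capacity × k / (k + m), before subtracting headroom for the nearfull/full ratios and for rebuilding after failures. For a hypothetical 600 TB of raw space with a 4+2 profile, that would be 600 × 4/6 = 400 TB of usable space.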
Hi Raffael,
wondering if all OSDs are suffering from slow compaction or just the one
which is "near full"?
Do other OSDs have those "log_latency_fn slow operation observed for" lines?
Have you tried the "osd bench" command for your OSDs? Does it show similar
numbers for every OSD?
You might want
On 29/07/2020 16:00, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
On 29/07/2020 14:54, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
Hi,
I'm trying to have cl
Wow, that's crazy. You only had 13 compaction events for that OSD over
roughly 15 days but the average compaction time was 116 seconds! Notice
too though that the average compaction output size is 422MB with an
average output throughput of 3.5MB/s! That's really slow with RocksDB
sitting on an
On Wed, Jul 29, 2020 at 16:34, David Orman wrote:
> Thank you, everyone, for the help. I absolutely was mixing up the two,
> which is why I was asking for guidance. The example made it clear. The
> question I was trying to answer was: what would the capacity of the cluster
> be, for actual data, b
Dear Igor,
please find below data from "ceph osd df tree" and per-OSD bluestore stats
pasted together with the script for extraction for reference. We have now:
df USED: 142 TB
bluestore_stored: 190.9TB (142*8/6 = 189, so matches)
bluestore_allocated: 275.2TB
osd df tree USE: 276.1 (so matches w
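For the numbers above, the gap between bluestore_allocated (275.2 TB) and bluestore_stored (190.9 TB) is roughly 84 TB, i.e. about 44% allocation overhead on top of the stored bytes.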
Hello,
Yesterday I did:
ceph osd purge 32 --yes-i-really-mean-it
I also started to upgrade:
ceph orch upgrade start --ceph-version 15.2.4
It seems it's really gone:
ceph osd crush remove osd.32 => device 'osd.32' does not appear in
the crush map
ceph orch ps:
osd.32ceph01
Hi Igor,
thanks! Here a sample extract for one OSD, time stamp (+%F-%H%M%S) in file
name. For the second collection I let it run for about 10 minutes after reset:
perf_dump_2020-07-29-142739.osd181:"bluestore_write_big": 10216689,
perf_dump_2020-07-29-142739.osd181:"bluestore_wri
Jason,
The family and I are doing well, thanks for asking. I haven't worked
with Octopus yet, so I can't really speak to that. Ceph
historically hasn't cared about physical disk layout, and personally I
think the Ceph code path is too heavy to really worry about
optimizations there. The LVM la
Hi Robin,
Thanks for the reply. I'm currently testing this on a bucket with a single
object, on a Ceph cluster with a very tiny amount of data.
I've done what you suggested and run the `radosgw-admin lc process` command and
turned up the RGW logs - but I saw nothing.
[qs-admin@portala0 ceph]
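If it helps to rule out the configuration side, the lifecycle rules can also be read back with the same Java AWS SDK used elsewhere in this digest, to confirm the rule is actually attached to the bucket (a rough sketch; the endpoint, "default" region string and bucket name are placeholders):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;

public class CheckLifecycle {
    public static void main(String[] args) {
        // Credentials come from the default provider chain (e.g. environment variables).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://my.host:80", "default"))
                .withPathStyleAccessEnabled(true)
                .build();

        BucketLifecycleConfiguration lc = s3.getBucketLifecycleConfiguration("my-bucket");
        if (lc == null) {
            System.out.println("No lifecycle configuration set on the bucket");
            return;
        }
        for (BucketLifecycleConfiguration.Rule rule : lc.getRules()) {
            System.out.println(rule.getId() + " status=" + rule.getStatus()
                    + " expirationDays=" + rule.getExpirationInDays());
        }
    }
}

On the RGW side, `radosgw-admin lc list` (in addition to `lc process`, already tried above) can also be worth a look, since it shows whether RGW has registered the bucket for lifecycle processing at all.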
On 29/07/2020 16:54, Wido den Hollander wrote:
On 29/07/2020 16:00, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman
wrote:
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander
wrote:
On 29/07/2020 14:54, Jason Dillaman wrote:
On Wed, Jul 29, 2020 at 6:23 AM Wido d
cephadm will handle the LVM for you when you deploy using an OSD
specification. For example, we have NVME and rotational drives, and cephadm
will automatically deploy servers with the DB/WAL on NVME and the data on
the rotational drives, with a limit of 12 rotational per NVME - it handles
all the L
Hi Igor
Thanks for your answer. All the disks had the slow latency warnings. "Had"
because I think the problem is solved.
After moving some data and almost losing the nearfull nvme pool, because
one disk had so much latency that ceph decided to mark it out, I could
start destroying and recreating ea
Hi Mark
I think it's 15 hours, not 15 days. But the compaction time really seems
to be slow. I'm destroying and recreating all NVMe OSDs one by one, and
the recreated ones don't have latency problems and are also much faster
at compacting the disk.
This is from the last two hours:
Compaction Statistics
On 7/29/20 7:47 PM, Raffael Bachmann wrote:
Hi Mark
I think it's 15 hours, not 15 days. But the compaction time really seems
to be slow. I'm destroying and recreating all NVMe OSDs one by one, and
the recreated ones don't have latency problems and are also much
faster at compacting the disk.
Thi
Hi Chris,
Thanks for the info. The code worked for me with pathStyleAccess, without DNS
issues.
BasicAWSCredentials awsCreds = new BasicAWSCredentials("uiuiusidusiyd898798798", "HJHGGyugyuyudfyGJHGYGIYIGU");
AmazonS3ClientBuilder s3b = AmazonS3ClientBuilder.standard();
s3b.setEndpointConfiguration(n