the setting while
the OSD is down.
During benchmarks on raw disks I just switched the cache on and off as needed.
There was nothing running on the disks and the fio benchmark is destructive
anyway.
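For example, something along these lines (the device name is a placeholder and fio will overwrite the disk):
  # WARNING: destructive; only run against a disk that carries no OSD or data
  hdparm -W 0 /dev/sdX        # SATA: disable the volatile write cache (SAS needs sdparm/smartctl)
  fio --name=wcache-test --filename=/dev/sdX --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
  hdparm -W 1 /dev/sdX        # switch the cache back on afterwards if desired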
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
the OSD is started. Why and how
else would one want this to happen?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
disk_activate" && -n "${OSD_DEVICE}" ]] ; then
echo "Disabling write cache on ${OSD_DEVICE}"
/usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}"
fi
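For completeness, the full conditional probably looks something like this; the exact variable names depend on the ceph-container entrypoint, so treat this as a sketch:
  # assumed: CEPH_DAEMON and OSD_DEVICE are set by the container entrypoint
  if [[ "${CEPH_DAEMON}" == "disk_activate" && -n "${OSD_DEVICE}" ]] ; then
      echo "Disabling write cache on ${OSD_DEVICE}"
      /usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}"
  fi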
This works for both SAS and SATA drives and ensures that the write cache is
disabled before an OSD daemon starts.
worst-case situations.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Bastiaan
Visser
Sent: 17 January 2020 06:55:25
To: Dave Hall
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-
Is this issue now a no-go for updating to 13.2.7 or are there only some
specific unsafe scenarios?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Dan van der
Ster
Sent: 03 December
ing.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of John Hearns
Sent: 24 October 2019 08:21:47
To: ceph-users
Subject: [ceph-users] Erasure coded pools on Ambedded - advice please
I am se
e and what compromises are you willing
to make with regards to sleep and sanity.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Salsa
Sent: 21 October 2019 17:31
To: Martin Verges
Cc: ceph-use
password: "!"
comment: "ceph-container daemons"
uid: 167
group: ceph
shell: "/sbin/nologin"
home: "/var/lib/ceph"
create_home: no
local: yes
state: present
system: yes
This should err if a group and user ceph already exist with IDs
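For completeness, the existing IDs can be checked up front with, e.g.:
  getent passwd ceph    # expect uid 167, as in the task above
  getent group ceph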
On CentOS 7, the option "secretfile" requires installation of ceph-fuse.
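For reference, a kernel-client mount using secretfile looks roughly like this (monitor address, client name and paths are placeholders):
  yum install -y ceph-fuse    # as noted above, needed for the secretfile option on CentOS 7
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=client.cephfs,secretfile=/etc/ceph/client.cephfs.secret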
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Yan, Zheng
Sent: 07 August 2019 10:10:19
or path I tried different versions
Any help on this would be appreciated.
Frank
Frank Rothenstein
Systemadministrator
Fon: +49 3821 700 125
Fax: +49 3821 700 190
Internet: www.bodden-kliniken.de
E-Mail: f.rothenst...@bodden-kliniken.de
pshots due to a not yet fixed bug; see this
thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg54233.html
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Robert Ruge
Sen
node) against a running cluster with mons in quorum.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Oscar Segarra
Sent: 15 July 2019 11:55
To: ceph-users
Subject: [ceph-users] What if
being powers of 2.
Yes, the 6+2 is a bit surprising. I have no explanation for the observation. It
just seems a good argument for "do not trust what you believe, gather facts".
And to try things that seem non-obvious - just to be sure.
Best regards,
=====
Frank Schilder
fig, kernel
parameters etc, etc. One needs to test what one has.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Lars
Marowsky-Bree
Sent: 11 July 2019 10:14:04
To: ceph-users@lists.ceph.com
integer.
alloc_size should be an integer multiple of object_size/k.
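To make that concrete: with object_size = 4 MiB and k = 4, object_size/k = 1 MiB, so alloc_size would have to be 1 MiB, 2 MiB, 3 MiB and so on; the numbers are only meant to illustrate the constraint, not as a recommendation.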
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: 09 July 2019 09:22
To: Nathan Fish; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] What
_size=object_size/k. Coincidentally, for
spinning disks this also seems to imply best performance.
If this is wrong, maybe a disk IO expert can provide a better explanation as a
guide for EC profile choices?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
e works well for the majority of our use cases. We
can still build small expensive pools to accommodate special performance
requests.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of David
Sent
Dear Yan, Zheng,
does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a
link to the issue tracker?
Thanks and best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 20 May 2019
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..."
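For reference, such a change can be applied to running OSDs roughly like this (the value only illustrates doubling; check the default for your release):
  ceph tell osd.* injectargs '--bluestore_compression_min_blob_size_hdd=262144'
  # or set it in the [osd] section of ceph.conf and restart the OSDs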
____
From: Frank Schilder
Sent: 20 June 2019 19:02
To: Dan van der Ster; ceph-users
Subject: Re: [ceph-users] understanding the bluestore blob, chunk and
compression para
replicated pools, the aggregated IOPs might be heavily affected. I have,
however, no data on that case.
Hope that helps,
Frank
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Dan van der
Ster
Sent: 20
Please ignore the message below, it has nothing to do with ceph.
Sorry for the spam.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Frank
Schilder
Sent: 17 June 2019 20:33
To: ceph
stable)
I can't see anything unusual in the logs or health reports.
Thanks for your help!
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo
ng crush
rules to adjust locations of pools, etc.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
nly be providing CephFS, fairly large
> > files, and will use erasure encoding.
> >
> > many thanks for any advice,
> >
> > Jake
> >
> >
gh-network-load scheduled tasks on your machines (host or VM) or
somewhere else affecting relevant network traffic (backups etc?)
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Marc Roos
Se
that cannot be questioned by a single OSD
trying to mark itself as in.
At least, the only context in which I have heard of OSD flapping was in
connection with 2/1 pools. I have never seen such a report for, say, 3/2 pools. Am I
overlooking something here?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Dear Maged,
thanks for elaborating on this question. Is there already information in which
release this patch will be deployed?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users
10/18/surviving-a-ceph-cluster-outage-the-hard-way/
. You will easily find more. The deeper problem here is called "split-brain"
and there is no real solution to it except to avoid it at all cost.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
an interesting feature? Is there any reason for not
remapping all PGs (if possible) prior to starting recovery? It would eliminate
the lack of redundancy for new writes (at least for new objects).
Thanks again and best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
sion. Either min_size=k is safe or not. If
it is not, it should never be used anywhere in the documentation.
I hope I marked my opinions and hypotheses clearly and that the links are
helpful. If anyone could shed some light on why exactly min_size=k+1 is
important, I would be grateful.
Best regards,
Dear Yan,
thank you for taking care of this. I removed all snapshots and stopped snapshot
creation.
Please keep me posted.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 20 May 2019 13:34:07
00 PGs per OSD. I actually plan to give the cephfs
a bit higher share for performance reasons. It's on the list.
Thanks again and have a good weekend,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Stefan Kooman
Sent: 18 May 201
versioned encoding,6=dirfrag is stored in omap,8=no
anchor table,9=file layout v2,10=snaprealm v2}
Sorry, I should have checked this first.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
0b~1,10d~1,10f~1,111~1]
The relevant pools are con-fs-meta and con-fs-data.
Best regards,
Frank
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
[root@ceph-08 ~]# cat /etc/tuned/ceph/tuned.conf
[main]
summary=Settings for ceph cluster. Derived from throughput-performance.
include=throughput-performance
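With the profile in place, it is activated with the usual tuned commands (profile name follows the directory above):
  tuned-adm profile ceph
  tuned-adm active     # verify the active profile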
be, keeping in mind that we are in a pilot production phase already and
need to maintain integrity of user data?
Is there any counter showing if such operations happened at all?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
single-file-read load on it.
I hope it doesn't take too long.
Thanks for your input!
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 16 May 2019 09:35
To: Frank Schilder
Subject: Re: [ceph-users] mimic
relevant if multiple MDS daemons are active on a file system.
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 16 May 2019 05:50
To: Frank Schilder
Cc: Stefan Kooman; ceph-users@lists.ceph.com
Subject: Re: [ceph
"time": "2019-05-15 11:38:36.511381",
"event": "header_read"
},
{
"time": "2019-05-15 11:38:36.511383",
"event": "throttled"
ser running the benchmark. Only IO to particular files/a particular
directory stopped, so this problem seems to remain isolated. Also, the load on
the servers was not high during the test. The fs remained responsive to other
users. Also, the MDS daemons never crashed. There was no fail-over e
helps,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Rhian Resnick
Sent: 16 November 2018 16:58:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Checking cephfs compression is working
How do you confirm
forgive me, it's my mistake - -
On Sat, Mar 23, 2019 at 4:28 PM Frank Yu wrote:
> Hi guys,
>
> I have tried to set up a cluster with this version, and I found the mgr
> prometheus metrics have changed a lot compared with version 13.2.x,
> e.g. there are no ceph_mds_* related
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Ragan, Tj
(Dr.)
Sent: 14 March 2019 11:22:07
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] bluestore compression enabled but no
s return zero
> and this will lead to the error message.
>
> I have set nodeep-scrub and i am waiting for 12.2.11.
>
> Thanks
> Christoph
>
> On Fri, Dec 21, 2018 at 03:23:21PM +0100, Hervé Ballans wrote:
> > Hi Frank,
> >
> > I encounter exactly the same iss
daily? Can the errors
possibly be due to deep scrubbing too aggressively?
I realize these errors indicate potential failing drives but I can't
replace a drive daily.
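For anyone wanting to dig into one of these, the standard starting points are (pgid is a placeholder):
  ceph health detail                                    # lists the PGs with scrub errors
  rados list-inconsistent-obj <pgid> --format=json-pretty
  ceph pg repair <pgid>                                 # only once the cause is understood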
thx
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.cep
7.36 deep-scrub 1 errors
(see question marks in the table above; what is the resulting mode?).
What I would like to do is enable compression on all OSDs, enable compression
on all data pools, and disable compression on all metadata pools. Data and
metadata pools might share OSDs in the future. The above ta
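On the per-pool side, that translates into commands like these (pool names as used elsewhere in this thread):
  ceph osd pool set con-fs-data compression_mode aggressive
  ceph osd pool set con-fs-data compression_algorithm snappy
  ceph osd pool set con-fs-meta compression_mode none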
tore_compressed_original=0.04 or
bluestore_compressed_allocated/bluestore_compressed_original=0.5? The second
ratio does not look too impressive given the file contents.
4) Is there any way to get uncompressed data compressed as a background task
like scrub?
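For what it is worth, the counters behind these ratios can be read per OSD (run on the OSD host; osd.0 is a placeholder):
  ceph daemon osd.0 perf dump | grep bluestore_compressed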
If you have the time to look at thes
ou know.
Thanks and have a nice weekend,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 12 October 2018 16:50:31
To: Frank Schilder
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] bluestore compression enabled but n
ssion happening. If you know about something other than the "ceph osd pool set"
commands, please let me know.
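For reference, the OSD-side counterpart of those pool flags lives in ceph.conf, roughly like this (a sketch; values only as an illustration):
  [osd]
  bluestore compression mode = aggressive
  bluestore compression algorithm = snappy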
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 12 October 2018 15:47:20
To:
possibly provide a source or sample
commands?
Thanks and best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 09 October 2018 17:42
To: Frank Schilder
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
John Spray wrote:
> On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote:
>>
>> Hi,
>>
>> On my cluster I tried to clear all objects from a pool. I used the
>> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
>> cleanup doe
g the object is in, but the problem
persists. What causes this?
I use Centos 7.5 with mimic 13.2.2
regards,
Frank de Bot
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
00
All as it should be, except for compression. Am I overlooking something?
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi all,
I was wondering if anyone out there has increased the value for
bluestore_prefer_deferred_size
to effectively defer all writes.
If so, did you experience any unforeseen side effects?
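For context, the kind of change I have in mind would be along these lines in ceph.conf (values are placeholders to illustrate deferring everything; the defaults differ per device class):
  [osd]
  bluestore prefer deferred size hdd = 4194304
  bluestore prefer deferred size ssd = 4194304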
thx
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
"ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0)
luminous (stable)": 79
}
}
On Sat, Sep 15, 2018 at 10:45 PM Paul Emmerich
wrote:
> Well, that's not a lot of information to troubleshoot such a problem.
>
> Please post the output of the following command
n MB/s.
Is there any way to fix the unclean pg quickly?
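The standard starting points for a stuck PG are (pgid is a placeholder):
  ceph health detail
  ceph pg dump_stuck unclean
  ceph pg <pgid> query      # shows which OSDs/peering state hold it up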
--
Regards
Frank Yu
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi there,
Any plan for the release of 13.2.1?
--
Regards
Frank Yu
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I session at any time.
Would any of those two options be possible to configure on the ceph iscsi
gateway solution?
Regards,
Frank
Jason Dillaman wrote:
> Conceptually, I would assume it should just work if configured correctly
> w/ multipath (to properly configure the ALUA settings on the LUNs).
On Tue, Jun 26, 2018 at 6:06 PM Frank de Bot (lists)
mailto:li...@searchy.net>> wrote:
Hi,
In my test setup I have a ceph iscsi gateway (configured as in
http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ )
I would like to use this with a FreeBSD (11.1) initiat
with this gateway setup?
Regards,
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Frank (lists) wrote:
> Hi,
>
> On a small cluster (3 nodes) I frequently have slow requests. When
> dumping the inflight ops from the hanging OSD, it seems it doesn't get a
> 'response' for one of the subops. The events always look like:
>
I've done some
ommit_rec from 18"
}
]
The OSD IDs are not the same. Looking at osd.20, the OSD process runs and
it accepts requests ('ceph tell osd.20 bench' runs fine). When I restart
the process for the OSD, the request completes.
I could no
does iscsi perform compared to krbd? I've already done some
benchmarking, but it didn't perform anywhere near what krbd is doing. krbd
easily saturates the public network, iscsi about 75%. tcmu-runner is
running during a benchmark at a load of 50 to 75% on the (owner) target
Re
Just having reliable hardware isn't enough for monitor failures. I've had a
case where a wrongly typed command brought down all three monitors via
segfault, with no way to bring them back since the command corrupted the
monitor database. I wish there was a checkpoint implemented in the
and error?
thx
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
?
Would love to hear some actual numbers from users.
thx
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
here https://ceph.com/pgcalc/) along with
pools for Kubernetes and RGW.
2. Define a single block storage pool (to be used by OpenStack and
Kubernetes) and an object pool (for RGW).
I am not sure how much space each component will require at this time.
thx
Frank
Running ceph 12.2.2 on CentOS 7.4. The cluster was in healthy condition until
a command caused all the monitors to crash.
Applied a private build for fixing the issue (thanks !)
https://tracker.ceph.com/issues/22847
the monitors are all started, and all the OSDs are reported as being up in ceph
Thanks, I’m downloading it right now
--
Efficiency is Intelligent Laziness
From: "ceph.nov...@habmalnefrage.de"
Date: Friday, February 2, 2018 at 12:37 PM
To: "ceph.nov...@habmalnefrage.de"
Cc: Frank Li , "ceph-users@lists.ceph.com"
Subject: Aw: Re: [ceph-use
Sure, please let me know where to get and run the binaries. Thanks for the fast
response !
--
Efficiency is Intelligent Laziness
On 2/2/18, 10:31 AM, "Sage Weil" wrote:
On Fri, 2 Feb 2018, Frank Li wrote:
> Yes, I was dealing with an issue where OSD are not peerin
b47f9427c6c97e2144b094b7e5ba) luminous
(stable)
--
Efficiency is Intelligent Laziness
On 2/2/18, 9:45 AM, "Sage Weil" wrote:
On Fri, 2 Feb 2018, Frank Li wrote:
> Hi, I ran the ceph osd force-create-pg command in luminous 12.2.2 to
recover a failed pg, and it
>
Hi, I ran the ceph osd force-create-pg command in luminous 12.2.2 to recover a
failed pg, and it instantly caused all of the monitors to crash. Is there any
way to revert back to an earlier state of the cluster?
Right now, the monitors refuse to come up; the error message is as follows:
I’ve file
On 20.11.2017 at 15:10, Jason Dillaman wrote:
Recommended way to do what, exactly? If you are attempting to rename
the target while keeping all other settings, at step (3) you could use
"rados get" to get the current config, modify it, and then "rados put"
to upload it before continuing to step
e gateway.conf from rbd pool 'rados -p rbd rm gateway.conf'
4. Start the iSCSI gateway on all nodes 'systemctl start rbd-target-api'
Is this the recommended way?
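Following the rados get/put suggestion above, step 3 could instead look roughly like this (object and pool names as in the steps above; keep a backup):
  rados -p rbd get gateway.conf gateway.conf.json
  cp gateway.conf.json gateway.conf.json.bak
  # edit gateway.conf.json and replace the old target IQN
  rados -p rbd put gateway.conf gateway.conf.json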
Thank you
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
how can I rename an iscsi target_iqn?
And where is the configuration that I made with gwcli stored?
Thank you
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
John,
I tried to write some data to the newly created files; it failed, just as you
said.
Thanks very much.
On Thu, Oct 12, 2017 at 6:20 PM, John Spray wrote:
> On Thu, Oct 12, 2017 at 11:12 AM, Frank Yu wrote:
> > Hi,
> > I have a ceph cluster with three nodes, and I have a c
n on pool cephfs_data; this means I shouldn't be able to write data under the
mountpoint /mnt/ceph/, or am I wrong?
thanks
--
Regards
Frank Yu
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
We have made sure that the key, the ceph user, and the ceph admin keys are
correct. Could you let us know if there is any other possibility that would
mess up the integration?
Regards,
Frank
On 03/06/2017 01:22 PM, Wido den Hollander wrote:
On 6 March 2017 at 6:26, frank wrote:
Hi,
We have
and jewel as its ceph version.
Any help will be greatly appreciated.
Regards,
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
se let me know the details about the ceph
installation steps that I should follow to troubleshoot this issue.
Regards,
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"defau
up set --rgw-zonegroup=default
From: Orit Wasserman
Date: 26 July 2016 at 12:32:58
To: Frank Enderle
Cc: ceph-users@lists.ceph.com, Shilpa Manjarabad Jagannath
Subj
From: Orit Wasserman
Date: 26 July 2016 at 12:13:21
To: Frank Enderle
Cc: ceph-users@lists.ceph.com, Shilpa Manjarabad Jagan
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": ""
}
and
radosgw-admin -
Heppacher Str. 39
71404 Korb
Telefon: +49 7151 1351565 0
Telefax: +49 7151 1351565 9
E-Mail: frank.ende...@anamica.de
Internet: www.anamica.de
Handelsregister: AG Stuttgart HRB 732357
Geschäftsführer: Yvonne Holzwarth, Frank Enderle
From: Orit Wasserman
Da
It most certainly looks very much like the same problem. Is there a way to
patch the configuration by hand to get the cluster back into a working state?
--
From: Shilpa Manjarabad Jagannath
Date: 25 July 2016 at 10:34:42
To: Frank Enderle
t-placement",
"val": {
"index_pool": ".rgw.buckets.index",
"data_pool": ".rgw.buckets",
"data_extra_pool": ".rgw.buckets.extra",
"index
mixed up with the zone/zonegroup stuff
during the update.
Would somebody be able to take a look at this? I'm happy to provide all the
required files; just name them.
Thanks,
Frank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hello John,
that was the info I missed (both - create pools and fs). Works now.
Thank you very much.
Kind regards
Petric
> -Original Message-
> From: John Spray [mailto:jsp...@redhat.com]
> Sent: Montag, 21. September 2015 14:41
> To: Frank, Petric (Petric)
>
Hello,
I'm facing a problem where mds does not seem to start.
I started mds in debug mode "ceph-mds -f -i storage08 --debug_mds 10" which
outputs in the log:
-- cut -
2015-09-21 14:12:14.313534 7ff47983d780 0 ceph version 0.94.3
(95cefea9fd9ab740
If you don't need LACP you could use round-robin bonding mode.
With 4x1Gbit NICs you can get a bandwidth of 4Gbit per TCP connection.
Either create trunks on stacked switches (e.g. Avaya) or use single
switches (e.g. HP 1810-24) and a locally managed MAC address per node/bond.
The latter is some
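A minimal round-robin bond in RHEL/CentOS network-scripts style looks roughly like this (names and addresses are placeholders):
  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=balance-rr miimon=100"
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.168.1.10
  PREFIX=24
  # each slave: ifcfg-ethX with MASTER=bond0 and SLAVE=yes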
Specialist , Storage Platforms
> CSC - IT Center for Science,
> Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
> mobile: +358 503 812758
> tel. +358 9 4572001
> fax +358 9 4572302
> http://www.csc.fi/
> ****
On Tue, Dec 2, 2014 at 12:42 PM, Gregory Farnum wrote:
>
> On Tue, Dec 2, 2014 at 10:55 AM, Ken Dreyer wrote:
> > On 12/02/2014 10:59 AM, Gregory Farnum wrote:
> >> We aren't currently doing any of the ongoing testing which that page
> >> covers on CentOS 7. I think that's because it's going to f
ons/
Its absence is currently causing great amounts of consternation in a
discussion about using and deploying Ceph in an environment I deal with and
I'm curious if there are any particular reasons it's absent from the list.
Thanks,
Frank
__
Hi,
Can anyone help me resolve the following error? Thanks a lot.
rest-bench --api-host=172.20.10.106 --bucket=test
--access-key=BXXX --secret=z
--protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write
host=172.20.10.106
ERROR: failed to c
1.
Does anyone have an answer for this error?
2.
rest-bench --api-host=s3-website-us-east-1.amazonaws.com
--bucket=frank-s3-test --access-key=XXX
--secret=IzuCXXXDDObLU --block-size=8 --protocol=http
--uri_style=path write
3.
host=s3