radosgw-admin --tenant tnt2 --uid usery --display-name "tnt2-usery" \
--access_key "useryacc" --secret "test456" user create
Remember that to make use of this feature, you need recent librgw and
matching nfs-ganesha. In particular, Ceph should have, among other
changes:
> problems with a multi user environment of RGW and nfs-ganesha.
] - data[-2][0])
ZeroDivisionError: float division by zero
[root@ceph1 ~]#
I could still figure out which OSD it was with systemctl, but I had to
purge the OSD before ceph osd status would run again. Is this normal
behaviour?
Cordially yours,
Benjamin
> On Tue, Jul 9, 2019 at 1:26 PM Matt Benjamin wrote:
>>
>> Hi Harald,
>>
>> Please file a tracker issue, yes. (Delete
er ends
> (until the summaries LinkedList overflows).
>
> Thoughts?
42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http
> status=206
> 2019-05-03 15:37:28.959 7f4a68484700 1 == req done req=0x55f2fde20970 op
> status=-104 http_status=206 ==
>
>
> -Original Message-
> From: EDH - Manuel Rios Fern
> "objects_per_shard": 55672,
> "fill_status": "OK"
> },
>
> We really don't know how to solve that; it looks like a timeout or slow
> performance for that bucket.
>
> Our RGW sect
> to recreate this as I have to hit the cluster very hard to get it to start
> lagging.
>
> Thanks, Aaron
>
> > On Apr 12, 2019, at 11:16 AM, Matt Benjamin wrote:
> >
> > Hi Aaron,
> >
> > I don't think that exists currently.
> >
> > Matt
other words is it possible to
> > make sure to log this in stdout of the daemon "radosgw"?
>
> It seems to me impossible to put ops log in the stdout of the "radosgw"
> process (or, if it's possible, I have not found). So I have made a
> workaround. I have set:
> > >
> > > I would assume then that unlike what documentation says, it's safe to
> > > run 'reshard stale-instances rm' on a multi-site setup.
> > >
> > > However it is quite telling if the author of this feature doesn't
> > > trust
Hi,
I'm getting an error when trying to use the APT repo for Ubuntu bionic.
Does anyone else have this issue? Is the mirror sync actually still in
progress? Or was something setup incorrectly?
E: Failed to fetch
https://download.ceph.com/debian-nautilus/dists/bionic/main/binary-amd64/Packages.bz2
Would you be willing to elaborate on what configuration specifically is bad?
That would be helpful for future reference.
Yes, we have tried to access with ceph-objectstore-tool to export the shard.
The command spits out the tcmalloc lines shown in my previous output and then
crashes with an 'Ab
Yes, sorry to misstate that. I was conflating with lifecycle
configuration support.
Matt
On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:58 PM, Matt Benjamin wrote:
> > Sorry, object tagging. There's a bucket tagging question in another thread
Sorry, object tagging. There's a bucket tagging question in another thread :)
Matt
On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin wrote:
>
> Hi Konstantin,
>
> Luminous does not support bucket tagging--although I've done Luminous
> backports for downstream use, and woul
n this tags [1].
>
> ?
>
>
> Thanks,
>
> k
>
> [1] https://tracker.ceph.com/issues/24011
>
>
>
After restarting several OSD daemons in our ceph cluster a couple days ago, a
couple of our OSDs won’t come online. The services start and crash with the
below error. We have one pg marked as incomplete, and will not peer. The pool
is erasure coded, 2+1, currently set to size=3, min_size=2. The
nlineSkipList const&>::Insert
>
> So on one hand we want large buffers to avoid short lived data going
> into the DB, and on the other hand we want small buffers to avoid large
> amounts of comparisons eating CPU, especially in CPU limited environments.
>
>
> Mark
>
Sorry, I meant L2.
Am 12.03.19 um 14:25 schrieb Benjamin Zapiec:
> May I configure the size of WAL to increase block.db usage?
> For example I configure 20GB I would get an usage of about 48GB on L3.
>
> Or should I stay with ceph defaults?
> Is there a maximal size for WAL t
May I configure the size of the WAL to increase block.db usage?
For example, if I configure 20GB, would I get a usage of about 48GB on L3?
Or should I stay with the Ceph defaults?
Is there a maximal size for the WAL that makes sense?
10GB wal.db?
>>
>> Has anyone done this before? Anyone who had sufficient SSD space
>> but stuck with wal.db to save SSD space?
>>
>> If I'm correct, the block.db will never be used for huge images.
>> And even though it may be used for one or two images
ocks from
it. After a while each VM should access the images pool less and
less due to the changes made in the VM.
Any thoughts about this?
Best regards
The output has 57000 lines (and growing). I’ve uploaded the output to:
https://gist.github.com/zieg8301/7e6952e9964c1e0964fb63f61e7b7be7
Thanks,
Ben
From: Matthew H
Date: Wednesday, February 27, 2019 at 11:02 PM
To: "Benjamin. Zieglmeier"
Cc: "ceph-users@lists.ceph.com"
Hello,
We have a two zone multisite configured Luminous 12.2.5 cluster. Cluster has
been running for about 1 year, and has only ~140G of data (~350k objects). We
recently added a third zone to the zonegroup to facilitate a migration out of
an existing site. Sync appears to be working and runnin
The OSDs on SSD don't use db.wal or db.block.
The OSDs on HDD do have a separate db.block partition on
SSD (250GB, but db.block only contains about 5GB; good question
why the rest is not used ;-) ).
Any suggestions on the high disk utilization by bstore_kv_sync?
Best regards
>
> Is it not possible to allow testuser to only write in folder2?
>
>
g a look I think.)
>
> Cheers
> Florian
>
https://github.com/ceph/ceph/pull/23994
>
> Sooo... bit complicated, fix still pending.
>
> Cheers,
> Florian
>> what...?
>>
>> I did naively try some "radosgw-admin bucket check [--fix]" commands
>> with no change.
>>
>> Graham
>>0/ 5 objectcacher
>>0/ 5 client
>>1/ 5 osd
>>0/ 5 optracker
>>0/ 5 objclass
>>1/ 3 filestore
>>1/ 3 journal
>>0/ 5 ms
>>1/ 5 mon
>>0/10 monc
>>1/ 5 paxos
>>0/ 5 tp
>>
f something fishy is going
> on we can try opening a bug.
>
> Thank you.
>
, looking at the physical
> storage.
>
> Any ideas where to look next?
>
> thanks for all the help.
, Sep 11, 2018 at 5:32 PM Benjamin Cherian <
> benjamin.cher...@gmail.com> wrote:
>
>> Ok, that’s good to know. I was planning on using an EC pool. Maybe I'll
>> store some of the larger kv pairs in their own objects or move the metadata
>> into it's own repli
r is there
some hidden cost to storing kv pairs in an EC pool I’m unaware of, e.g.,
does the kv data get replicated across all OSDs being used for a PG or
something?)
Thanks,
Ben
On Tue, Sep 11, 2018 at 1:46 PM Patrick Donnelly
wrote:
> On Tue, Sep 11, 2018 at 12:43 PM, Benjamin Cherian
> w
ory Farnum wrote:
> On Tue, Sep 11, 2018 at 7:48 AM Benjamin Cherian <
> benjamin.cher...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm interested in writing a relatively simple application that would use
>> librados for storage. Are there recommendations for w
al data (a
relatively large FP array) as a binary blob (~3-5 MB).
Thanks,
Ben
Hi,
I'm interested in writing a relatively simple application that would use
librados for storage. Are there recommendations for when to use the omap as
opposed to an xattr? In theory, you could use either a set of xattrs or an
omap as a kv store associated with a specific object. Are there
recomm
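For reference, both xattrs and omap are per-object key/value interfaces in librados: omap entries live in the OSD's key/value database and generally cope better with many or larger entries, while xattrs suit a handful of small values. Below is a minimal sketch (not from this thread; the pool name "testpool", object name "sample" and keys are made up) of how the two look from python-rados, with the bulk payload kept in the object data itself:

import rados

# Connect with the default user and open a pool (placeholder name).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('testpool')

# The bulk payload (e.g. a serialized FP array of a few MB) goes into the
# object data itself.
ioctx.write_full('sample', b'\x00' * (3 * 1024 * 1024))

# A handful of small, fixed attributes fit naturally in xattrs.
ioctx.set_xattr('sample', 'fmt', b'float64')

# A larger or enumerable key/value set is usually better kept in omap.
with rados.WriteOpCtx() as op:
    ioctx.set_omap(op, ('rows', 'cols'), (b'1024', b'768'))
    ioctx.operate_write_op(op, 'sample')

# Omap entries can be listed back without reading the object data.
with rados.ReadOpCtx() as op:
    it, ret = ioctx.get_omap_vals(op, "", "", 10)
    ioctx.operate_read_op(op, 'sample')
    print(dict(it))

ioctx.close()
cluster.shutdown()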
to clearing this error message?
>>
>>
>>
>> Regards,
>>
>> -Brent
>>
>>
>>
>> Existing Clusters:
>>
>> Test: Luminous 12.2.7 with 3 osd servers, 1 mon/man, 1 gateway ( all
>> virtual
>> )
>>
>> US Produ
"python_ceph.conf", rados_id="dms")
cluster.connect() # No exception when using keyring containing key for dms user!
Regards,
Benjamin Cherian
On Sun, Aug 19, 2018 at 9:55 PM, Benjamin Cherian <
benjamin.cher...@gmail.com> wrote:
> Hi David,
>
> Thanks for the reply...
David Turner wrote:
> You are not specifying which user you are using. Your config file
> specifies the keyring, but it's still trying to use the default user admin.
> If you specify that in your python you'll be good to go.
>
> On Sun, Aug 19, 2018, 9:17 PM Benjamin Cheria
Hi,
I'm trying to write a simple test application using the Python 3 RADOS API.
I've made a separate keyring for my application with the same permissions
as the admin keyring, but I can't seem to use it to connect to my cluster.
The only keyring that seems to work is the client.admin keyring. Does
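Based on the rados_id="dms" call shown earlier in this thread, the missing piece is telling librados which user to authenticate as; the keyring alone only supplies the key. A minimal sketch (the client name "dms" and conf file name are taken from the messages above, everything else is illustrative):

import rados

# Authenticate as client.dms instead of the default client.admin; the conf
# file (or a keyring option in its [client.dms] section) must point at the
# keyring that actually contains this user's key.
cluster = rados.Rados(conffile='python_ceph.conf', rados_id='dms')
# Equivalent: rados.Rados(conffile='python_ceph.conf', name='client.dms')
cluster.connect()
print(cluster.get_fsid())
cluster.shutdown()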
Hi Cody,
AFAIK, Ceph-ansible will not create separate partitions for the
non-collocated scenario (at least in the stable branches). Given that
ceph-volume is now the recommended way of creating OSDs, you would want to
create all the logical volumes and volume groups you intend to use for
data, DB
hi wido,
after adding the hosts back to monmap the following error occurs in ceph-mon
log.
e5 ms_verify_authorizer bad authorizer from mon 10.111.73.3:6789/0
I tried to copy the mon keyring to all other nodes, but the problem still exists.
kind regards
Ben
> Benjamin Naber wrote on 26 J
7/26/2018 11:50 AM, Benjamin Naber wrote:
> > hi Wido,
> >
> > got the following output since I've changed the debug setting:
> >
>
> This is only debug_ms it seems?
>
> debug_mon = 10
> debug_ms = 10
>
> Those two should be set where debug_mon will tell mor
o 0 30 bytes epoch 0) v1 60+0+0
(2547518125 0 0) 0x55aa46be4fc0 con 0x55aa46bc1000
2018-07-26 11:46:24.004954 7f819e167700 10 -- 10.111.73.1:6789/0 >>
10.111.73.3:0/1033315403 conn(0x55aa46bc1000 :6789 s=STATE_OPEN pgs=74 cs=1
l=1)._try_send sent bytes 9 remaining bytes 0
2018-07-26
hi Wido,
thanks for your reply.
Time is also in sync; I forced a time sync again to be sure.
kind regards
Ben
> Wido den Hollander wrote on 26 July 2018 at 10:18:
>
>
>
>
> On 07/26/2018 10:12 AM, Benjamin Naber wrote:
> > Hi together,
> >
> >
Hi all,
we currently have some problems with monitor quorum after shutting down all
cluster nodes for migration to another location.
mon_status gives us the following output:
{
    "name": "mon01",
    "rank": 0,
    "state": "electing",
    "election_epoch": 20345,
    "quorum": [],
    "features": {
        "r
/download/27863?v=t
>
> Kind regards,
>
> Caspar
>
> 2018-07-04 10:26 GMT+02:00 Benjamin Naber :
>
> > Hi @all,
> >
> > im currently in testing for setup an production environment based on the
> > following OSD Nodes:
> >
> > CE
Hi @all,
I'm currently testing a setup for a production environment based on the
following OSD nodes:
CEPH version: Luminous 12.2.5
5x OSD nodes with the following specs:
- 8-core Intel Xeon 2.0 GHz
- 96GB RAM
- 10x 1.92 TB Intel DC S4500 connected via SATA
- 4x 10 Gbit NIC, 2 bonded via LACP f
User_Id = key/secret>;
> Access_Key_Id = "";
> Secret_Access_Key = "";
> }
>
> RGW {
> cluster = "ceph";
> name = "client.radosgw.radosgw-s2";
> ceph_conf = "/etc/c
the Red Hat solution is still in progress, so I'm not sure if this even
> works. Thanks for any help,
>
> Josef
> "usage": {
>     "rgw.none": {
>         "size_kb": 0,
>         "size_kb_actual": 0,
>         "num_objects": 0
>     },
>     "rgw.main": {
>>>
>>>
>>> The 'bucket list' command takes a user and prints the list of buckets
>>> they own - this list is read from the user object itself. You can remove
>>> these entries with the 'bucket unlink' command.
>>>
raG9mX29zaXJpc2FkbWluIiwia2V$
>
> # Secret_Access_Key =
> "eyJSR1dfVE9LRU4iOnsidmVyc2lvbiI6MSwidHlwZSI6ImxkYXAiLCJpZCI6ImJtZWVraG9mX29zaXJpc2FkbWluI$
> # Secret_Access_Key = "weW\/XGiHfcVhtH3chUTyoF+uz9Ldz3Hz";
>
> }
_rgw.example.com:8080
>
>
> I get access denied,
> then I try with the ldap key and I get the same problem.
> I created a local user out of curiosity, put its access key and secret into
> s3cmd, and I could create a bucket. What am I doing wrong?
>
> __
Hi,
That's true, sure. We hope to support async mounts and more normal workflows
in future, but those are important caveats. Editing objects in place doesn't
work with RGW NFS.
Matt
- Original Message -
> From: "Gregory Farnum"
> To: "Matt Benjamin&q
> work; or this may work with a bit of effort. If this is possible, can this
> be achieved in a scalable manner to accommodate multiple (10s to 100s) users
> on the same system?
>
>
>
> I asked this question in #ceph and #ceph-devel. So far, there have not been
> replie
ellation async wrt completion? a cancellation in
that case could ensure that, if it succeeds, those guarantees are met, or else
fails (because the callback and completion have raced cancellation)?
Matt
>
> Yehuda
>
> plato.xie
ormance is by no means bad, we're just always greedy for more. :)
> )
>
> Thanks for any advice/suggestions!
Lembke wrote:
> Hi,
> see here:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg15546.html
>
> Udo
>
>
> On 16.12.2014 05:39, Benjamin wrote:
>
> I increased the OSDs to 10.5GB each and now I have a different issue...
>
> cephy@ceph-admin0:~/ceph-clus
7050 MB / 28339 MB avail
64 active+clean
Any other commands to run that would be helpful? Is it safe to simply
manually create the "data" and "metadata" pools myself?
On Mon, Dec 15, 2014 at 5:07 PM, Benjamin wrote:
>
> Aha, excellent suggestion! I
Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
- B
On Dec 15, 2014 5:06 PM, "Craig Lewis" wrote:
>
> On Sun, Dec 14, 2014 at 6:31 PM, Benjamin wrote:
>>
>> The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
>
o idea what the
heck is causing Ceph to complain.
Help? :(
~ Benjamin
brid threadpool/epoll approach? That I
> suspect would be very effective at reducing context switching,
> especially compared to what we do now.
>
> Mark
>
> On 08/28/2014 10:40 PM, Matt W. Benjamin wrote:
> > Hi,
> >
> > There's also an early-stage TCP
> As for messenger level, I have some very early works on
> it(https://github.com/yuyuyu101/ceph/tree/msg-event), it contains a
> new messenger implementation which support different event mechanism.
> It looks like at least one more week to make it work.
>
Hi,
I wasn't thinking of an interface to mark sockets directly (didn't know one
existed at the socket interface), rather something we might maintain, perhaps a
query interface on the server, or perhaps DBUS, etc.
Matt
- "Sage Weil" wrote:
> On Wed, 27 Aug 2014,
plausible to have hb messengers identify themselves to a bus as such, so
that external tools (here, the ts scripts) could introspect them?
Matt
Hi Dmitry,
Will you please share with us how things went on the meeting?
Many thanks,
Benjamin
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dmitry Borodaenko
> Sent: Wednesday, July 16, 2014 11:18 PM
> To: ceph-users
Hi Arne and James,
Ah, I misunderstood James' suggestion. Using bcache w/ SSDs can be another
viable alternative to SSD journal partitions indeed.
I think ultimately I will need to test the options since very few people have
experience with cache tiering or bcache.
Thanks,
Benjamin
From:
Hi James,
Yes, I've checked bcache, but as far as I can tell you need to manually
configure and register the backing devices and attach them to the cache device,
which is not really suitable for a dynamic environment (like RBD devices for cloud
VMs).
Benjamin
> -Original
al to the server), and as
far as I understand the cache flush operations are happening in a coalesced
fashion.
Plus a definite advantage would be that besides functioning as a 'write log'
(aka. journal), the SSDs would be serving as a read cache for hot data.
What do you think?
C
Thanks JC.
- Ben
On Fri, Jun 27, 2014 at 5:05 PM, Jean-Charles LOPEZ
wrote:
> Hi Benjamin,
>
> code extract
>
> sync_all_users() erroring is the sync of user stats
>
> /*
> * thread, full sync all users stats periodically
> *
> * only sync non idle users or one
Hello Ceph users,
Has anyone seen a radosgw error like this:
2014-06-27 14:02:39.254210 7f06b11587c0 0 ceph version 0.80.1
(a38fe1169b6d2ac98b427334c12d7cf81f809b74), process radosgw, pid 15471
2014-06-27 14:02:39.341198 7f06955ea700 0 ERROR: can't get key: ret=-2
2014-06-27 14:02:39.341212 7f0
ks that include what its latency and variances are, the DC 3700s
> deliver their IOPS without any stutters.
>
The eMLC version of the OCZ Deneva 2 didn't perform that well during stress
testing; the actual results were well below expectations:
http://www.storagereview.com/ocz_deneva_2_ente
bandwidth.
The 400GB Intel S3700 is a lot faster but double the price (around $950)
compared to the 200GB. Maybe I would be better off using enterprise SLC SSDs
for journals?
For example OCZ Deneva 2 C 60GB SLC costs around $640, and have 75K write IOPS
and ~510MB/s write bandwidth by sp
Hi,
We are at the end of the process of designing and purchasing storage to provide
Ceph based backend for VM images, VM boot (ephemeral) disks, persistent volumes
(and possibly object storage) for our future Openstack cloud. We considered
many options and we chose to prefer commodity storage s
but I prefer "official" support with IB integrated in the main ceph
> repo
Hi,
I should have been careful. Our efforts are aimed at Giant. We're
serious about meeting delivery targets. There's lots of shakedown, and
of course further integration work, still to go.
Regards,
Matt
- "Gandalf Corvotempesta" wrote:
> 2014-05-01 0:20 GMT