the issue is relevant to any
TCP connection.
James
Thanks and regards,
James.
On Sun, Jul 7, 2019 at 10:30 PM Kai Stian Olstad
wrote:
> On 06.07.2019 16:43, Ashley Merrick wrote:
> > Looking at the possibility of upgrading my personal storage cluster from
> > Ubuntu 18.04 -> 19.04 to benefit from a newer version of the kernel etc.
>
> For a newer kernel install HWE[1], at
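As a sketch, enabling the HWE kernel stack on a stock Ubuntu 18.04 host
typically looks like the following (the meta-package name is the standard HWE
one, not something specific to this thread):
    sudo apt update
    sudo apt install --install-recommends linux-generic-hwe-18.04
    sudo reboot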
Hi all,
Just want to double-check something: we’re in the process of Luminous -> Mimic
upgrades for all of our clusters, particularly this section regarding the MDS
steps:
• Confirm that only one MDS is online and is rank 0 for your FS:
# ceph status
• Upgrade the last
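As a sketch of the kind of checks that section describes (the filesystem name
below is a placeholder, not from this thread):
    ceph status                       # confirm a single active MDS at rank 0
    ceph fs get <fs_name> | grep max_mds
    ceph fs set <fs_name> max_mds 1   # reduce to one active MDS before upgrading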
NAME             ID  USED     %USED  MAX AVAIL  OBJECTS
                 0   8.86TiB  64.57  4.86TiB     2637530
cephfs_data      1   2.59TiB  34.72  4.86TiB    25341863
cephfs_metadata  2    281GiB   5.34  4.86TiB     6755178
Cheers,
James
On 04/06/2019, 08:59, "Marc Roos" wrote:
How did thi
Hi all,
After a bit of advice to ensure we’re approaching this the right way.
(version: 12.2.12, multi-mds, dirfrag is enabled)
We have corrupt meta-data as identified by ceph
health: HEALTH_ERR
2 MDSs report damaged metadata
Asking the mds via damage ls
{
"dam
sizes but keeping
object size * count = 4M. Does anyone have any experience finding
optimal rbd parameters for this scenario?
Thanks,
James
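Not from the thread, but a sketch of how one might compare two striping layouts
that keep stripe unit x stripe count = 4M (pool and image names are
hypothetical, and the numbers are only examples to benchmark against each
other):
    rbd create rbdbench/img-a --size 50G --stripe-unit 1M  --stripe-count 4
    rbd create rbdbench/img-b --size 50G --stripe-unit 64K --stripe-count 64
    rbd bench --io-type write --io-size 64K --io-total 1G rbdbench/img-a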
ud:xenial-pike
I'd recommend testing the upgrade in some sort of pre-production testing
environment first!
HTH
James
(ceph-* charm contributor and user)
[0] https://github.com/openstack/charms.ceph/blob/master/ceph/utils.py#L2539
Thanks, Tom and John; both of your inputs were really helpful and helped put
things into perspective.
Much appreciated.
@John, I am based out of Dubai.
On Wed, Aug 29, 2018 at 2:06 AM John Hearns wrote:
> James, you also use the words enterprise and production ready.
> Is Redhat s
Dear cephers,
I am new to the storage domain.
Trying to get my head around an enterprise, production-ready setup.
The following article helps a lot here: (Yahoo ceph implementation)
https://yahooeng.tumblr.com/tagged/object-storage
But a couple of questions:
What HDD would they have used here
Hi Cephers,
I need to design an HA Ceph object storage system. The scenario is that we
are recording HD videos, and at the end of the day we need to copy all these
video files (each file is approx. 15 TB) to our storage system.
1) Which would be the best storage technology to transfer these PB-sized loads
o
(newbie warning - my first go-round with ceph, doing a lot of reading)
I have a small Ceph cluster, four storage nodes total, three dedicated to
data (OSDs) and one for metadata. One client machine.
I made a network change. When I installed and configured the cluster, it was
done
using the syst
and hence benefit from more threads?
Many thanks
James
y related to data integrity/resilience.
Could someone confirm my assertion is correct?
Many thanks
James
Thanks David.
Thanks again Cary.
If I have
682 GB used, 12998 GB / 13680 GB avail,
then I still need to divide 13680/3 (my replication setting) to get what my
total storage really is, right?
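Roughly, yes; as a back-of-the-envelope check, assuming everything sits in
size=3 replicated pools:
    13680 GB raw  / 3  ≈  4560 GB usable
      682 GB used / 3  ≈   227 GB of logical data stored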
Thanks!
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel
3 1.09 291
TOTAL 13680G 682G 12998G 4.99
MIN/MAX VAR: 0.79/1.16 STDDEV: 0.67
Thanks!
-Original Message-
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Friday, December 15, 2017 4:05 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard driv
2017 7:13 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)
James,
Usually once the misplaced data has balanced out the cluster should reach a
healthy state. If you run a "ceph health detail" Ceph will sho
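A sketch of the commands being suggested for watching the recovery (nothing
cluster-specific assumed):
    ceph health detail
    ceph -w     # stream cluster state while recovery/backfill progresses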
ded, 26 active+recovery_wait+degraded, 1
active+remapped+backfilling, 308 active+clean, 176
active+remapped+wait_backfill; 333 GB data, 370 GB used, 5862 GB / 6233 GB
avail; 0 B/s rd, 334 kB/s wr
-Original Message-
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Thursday, December 1
Hi all,
Please let me know if I am missing steps or using the wrong steps.
I'm hoping to expand my small CEPH cluster by adding 4TB hard drives to each of
the 3 servers in the cluster.
I also need to change my replication factor from 1 to 3.
This is part of an Openstack environment deployed by F
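A rough sketch of the two separate pieces involved (pool name and device are
placeholders; the OSD-creation tool depends on the release and on how the
cluster was deployed, so a Fuel environment may have its own workflow):
    ceph-volume lvm create --data /dev/sdX   # per new drive, per server (newer releases)
    ceph osd pool set <poolname> size 3
    ceph osd pool set <poolname> min_size 2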
...@redhat.com]
Sent: Wednesday, November 8, 2017 9:53 AM
To: James Forde
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade
Are your QEMU VMs using a different CephX user than client.admin? If so, can
you double-check your caps to ensure that the
On my cluster I have a ceph-deploy node that is not a mon or osd. This is my
bench system, and I want to recreate the ceph-deploy node to simulate a
failure. I cannot find this outlined anywhere, so I thought I would ask.
Basically follow Preflight
http://docs.ceph.com/docs/master/start/quick-s
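As a sketch of what re-creating an admin/ceph-deploy node typically looks like
(hostnames are placeholders):
    ceph-deploy install newadmin    # install packages on the rebuilt node
    ceph-deploy gatherkeys mon1     # pull keys back from an existing monitor
    ceph-deploy admin newadmin      # push ceph.conf and the admin keyring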
Title probably should have read "Ceph Data corruption shortly after Luminous
Upgrade"
Problem seems to have been sorted out. Still not sure what caused the original
problem, other than upgrade latency or mgr errors.
After I resolved the boot problem I attempted to reproduce error, but was
unsuccessful whi
Weird but very bad problem with my test cluster 2-3 weeks after upgrading to
Luminous.
All 7 running VMs are corrupted and unbootable: 6 Windows and 1 CentOS 7.
The Windows error is "unmountable boot volume"; CentOS 7 will only boot to
emergency mode.
3 VMs that were off during the event work as expecte
Thanks again Ronny,
OCFS2 is working well so far.
I have 3 nodes sharing the same 7TB MSA FC LUN. Hoping to add 3 more...
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com
Web: www.dialogic.com
Thanks Ric, thanks again Ronny.
I have a lot of good info now! I am going to try ocfs2.
Thanks
-- Jim
-Original Message-
From: Ric Wheeler [mailto:rwhee...@redhat.com]
Sent: Thursday, September 14, 2017 4:35 AM
To: Ronny Aasen; James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph
me LUNs/OSDs each, just to
learn how to work with Ceph.
If you want to have FC SAN-attached storage on servers, shareable
between servers in a usable fashion, I would rather mount the same SAN
LUN on multiple servers and use a cluster filesystem like OCFS or GFS
that is made for this ki
Hi,
Novice question here:
The way I understand CEPH is that it distributes data in OSDs in a cluster. The
reads and writes come across the ethernet as RBD requests and the actual data
IO then also goes across the ethernet.
I have a CEPH environment being setup on a fiber channel disk array (vi
Hello list,
I'm looking for some more information relating to CephFS and the 'Q'
size, specifically how to diagnose what contributes towards it rising
up
Ceph Version: 11.2.0.0
OS: CentOS 7
Kernel (Ceph Servers): 3.10.0-514.10.2.el7.x86_64
Kernel (CephFS Clients): 4.4.76-1.el7.elrepo.x86_64 - usi
Hi All
Thanks in advance for any help; I was wondering if anyone can help me with
a pickle I have gotten myself into!
I was in the process of adding OSDs to my small cluster (6 OSDs) and the
disk died halfway through; unfortunately I had left the defaults from when I was
bootstrapping the cluster in pl
1ms later.
Any further help is greatly appreciated.
On 17 May 2017 at 10:58, James Eckersall wrote:
> An update to this. The cluster has been upgraded to Kraken, but I've
> still got the same PG reporting inconsistent and the same error message
> about mds metadata damaged.
> C
ohn Spray [mailto:jsp...@redhat.com]
Sent: 23 May 2017 13:51
To: James Wilkins
Cc: Users, Ceph
Subject: Re: [ceph-users] MDS Question
On Tue, May 23, 2017 at 1:42 PM, James Wilkins
wrote:
> Quick question on CephFS/MDS but I can’t find this documented
> (apologies if it is)
>
>
>
Quick question on CephFS/MDS but I can't find this documented (apologies if it
is)
What does the q: in a ceph daemon perf dump mds output represent?
[root@hp3-ceph-mds2 ~]# ceph daemon
/var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok perf dump mds
{
"mds": {
"requ
o use it? I haven't been able to find any docs that explain.
Thanks
J
On 3 May 2017 at 14:35, James Eckersall wrote:
> Hi David,
>
> Thanks for the reply, it's appreciated.
> We're going to upgrade the cluster to Kraken and see if that fixes the
> metadata issu
Hi, no I have not seen any log entries related to scrubs.
I see slow requests for various operations including readdir, unlink.
Sometimes rdlock, sometimes wrlock.
On 12 May 2017 at 16:10, Peter Maloney
wrote:
> On 05/12/17 16:54, James Eckersall wrote:
> > Hi,
> >
> > We
sks
appear to be slow or excessively busy.
Does anyone have any ideas on how to diagnose what is causing this?
Thanks
James
:08:00 SRC=192.168.5.3
DST=192.168.3.2 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=35384 DF P
ROTO=TCP SPT=6817 DPT=52674 WINDOW=24576 RES=0x00 ACK FIN URGP=0
Many thanks
James
Hi David,
Thanks for the reply, it's appreciated.
We're going to upgrade the cluster to Kraken and see if that fixes the
metadata issue.
J
On 2 May 2017 at 17:00, David Zafman wrote:
>
> James,
>
> You have an omap corruption. It is likely caused by a bug wh
Hi,
I'm having some issues with a ceph cluster. It's an 8 node cluster running
Jewel ceph-10.2.7-0.el7.x86_64 on CentOS 7.
This cluster provides RBDs and a CephFS filesystem to a number of clients.
ceph health detail is showing the following errors:
pg 2.9 is active+clean+inconsistent, acting [3
not to do it.
Can you direct me to a good tutorial on how to do so?
And, you are right, I am a beginner.
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com
Web: www.dialogic.com – The Network Fuel
5494 192.168.1.4:6802/1005494
192.168.1.4:6803/1005494 192.168.0.6:6802/1005494 exists,up
ddfca14e-e6f6-4c48-aa8f-0ebfc765d32f
root@node-1:/var/log#
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com
Apologies if this is documented but I could not find any clear-cut advice.
Is it better to have a higher PG count for the metadata pool, or the data pool,
of a CephFS filesystem?
If I look at
http://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark -
specifically slide 06 - I
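For reference, a sketch of where those two PG counts get chosen (the numbers
are placeholders, not a recommendation):
    ceph osd pool create cephfs_metadata 128
    ceph osd pool create cephfs_data 2048
    ceph fs new cephfs cephfs_metadata cephfs_data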
Hello,
Hoping to pick other users' brains in relation to production CephFS deployments as
we're preparing to deploy CephFS to replace Gluster for our container based
storage needs.
(Target OS is Centos7 for both servers/clients & latest jewel release)
o) Based on our performance testing we're se
Hi,
Is there anyone in the community who has experience using "bcache" as a backend
for Ceph?
Nowadays most Ceph solutions are based on full-SSD or full-HDD backend
data disks, so in order
to balance cost against performance/capacity, we are trying a hybrid solution
with "bcache". It uti
Hi Gregory,
Many thanks for your reply. I couldn't spot any resources that describe/show
how you can successfully write / append to an EC pool with the librados API on
those links. Do you know of any such examples or resources? Or is it just
simply not possible?
Best regards,
James N
nd
my webserver thanks you!!!
James
On 7 October 2016 at 11:37, John Spray wrote:
> On Fri, Oct 7, 2016 at 8:04 AM, James Horner
> wrote:
> > Hi All
> >
> > Just wondering if anyone can help me out here. Small home cluster with 1
> > mon, the next phase of the pl
all my VMs into
rados. I don't care if CephFS gets wiped but I really need the vm images.
If the mon is borked permanently then is there a way I can recover the
images manually?
Thanks in advance for any help
James
Is it detrimental to use an EC pool
without a replicated pool? What are the performance costs of doing so?
Regarding point 3) Can you point me towards resources that describe what
features / abilities you lose by adopting an EC pool?
Many thanks in advance,
James Norman
Hi,
Not sure if anyone can help clarify or provide any suggestion on how to
troubleshoot this.
We have a ceph cluster recently built with ceph version Jewel, 10.2.2.
Based on "ceph -s" it shows that the data size is around 3TB but raw data used
is only around 6TB,
as the ceph is set with 3
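The arithmetic behind the question, assuming every byte really lands in a
size=3 replicated pool:
    expected raw used ≈ 3 TB logical x 3 replicas ≈ 9 TB   (versus the ~6 TB observed)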
client io 334 MB/s rd, 319 MB/s wr, 5839 op/s rd, 4848 op/s wr
Regards,
James Webb
DevOps Engineer, Engineering Tools
Unity Technologies
w when the patched version is up.
>>
>> Thanks for your help, much appreciated.
>>
>> On 13 April 2016 at 11:19, James Page wrote:
>>
>>> n Wed, 13 Apr 2016 at 10:09 hp cre wrote:
>>>
>>>> I am using version .31 from either the ceph repo or the
n Wed, 13 Apr 2016 at 10:09 hp cre wrote:
> I am using version .31 from either the ceph repo or the one you updated in
> the ubuntu repo.
> It seems that these commits were not "commited" in version .31 ?
>
Yes - they post-date the .31 release; a patched version of .31 should be in
Xenial shortl
spect and follow the code for ceph-deploy, though I'm no
>> python developer.
>> It seems that in hosts/debian/__init__.py, line number 20, that if
>> distro.lower == ubuntu then it will return upstart.
>>
>> Maybe that's the problem?
>>
>> On 13
Hello
On Mon, 11 Apr 2016 at 22:08 hp cre wrote:
> Hey James,
> Did you check my steps? What did you do differently and worked for your?
>
Your steps look OK to me; I did pretty much the same, but with three nodes
instead on a single node - but I'm scratching my head as to why I
On Mon, 11 Apr 2016 at 21:35 hp cre wrote:
> I wanted to try the latest ceph-deploy. That's why I downloaded this
> version (31). The latest Ubuntu version is (20).
>
> I tried today at the end of the failed attempt to uninstall this version
> and install the one that came with xenial, but whatever i
10:18 James Page wrote:
> On Mon, 11 Apr 2016 at 10:02 hp cre wrote:
>
>> Hello James,
>>
>> It's a default install of xenial server beta 2 release. Created a user
>> then followed the ceph installation quick start exactly as it is.
>>
>> Ceph-deploy ver
On Mon, 11 Apr 2016 at 10:02 hp cre wrote:
> Hello James,
>
> It's a default install of xenial server beta 2 release. Created a user
> then followed the ceph installation quick start exactly as it is.
>
> Ceph-deploy version 1.5.31 was used as follows
>
> 1- ce
ing to create your mon? I'm wondering
whether it's your deployment process rather than the ceph packages that have
a problem here.
Cheers
James
one pertinent to the systemd integration
Cheers
James
Hi there,
Got a quick question regarding CephFS. I have implemented the
setup from the quick start guide and have an admin node, a monitor/metadata
server, OSD0 and OSD1. If I were to implement the mount on the admin node
at the location /mnt/mycephfs, and if I were then to transfer files t
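For context, a kernel-client mount of the kind described usually looks
something like this (monitor address and secret are placeholders):
    sudo mkdir -p /mnt/mycephfs
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secret=<admin-key>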
Hi,
I'm looking to implement the CephFS on my Firefly release (v0.80) with
an XFS native file system, but so far I'm having some difficulties. After
following the ceph/qsg and creating a storage cluster, I have the following
topology
admin node - mds/mon
osd1
Hi,
I'm looking to create a Ceph storage architecture to store files, I'm
particularly interested in the metadata segregation so would be
implementing the Ceph Storage Cluster to start, with CephFS added once
done. I'm wondering what the best approach to storing data would be, for
example, conside
Hello,
So I run systems using Gentoo's OpenRC. Ceph is interesting, but
in the long term will it be mandatory to use systemd to keep
using Ceph?
Will there continue to be a supported branch that works with openrc?
Long range guidance is keenly appreciated.
up objects
independent from a list like other DFS
Thanks,
James
FS or to set-up Ceph Block Device client in
order to manage and store metadata?
Thanks,
James
e and then implementing
CephFS. However, I was wondering whether I should add a new machine to the
topology, a 'client machine', or whether this should double up with node1
(the monitor and metadata server), because it doesn't mention in the
g
d not found
I assumed that this would be a common problem and Googling would allow me
to resolve it; however, I have had no success finding a resolution.
Could anyone give me any ideas on what to try?
At the moment I'm not able to check the health of the clusters and so on.
Tha
I've just discovered the hashpspool setting and found that it is set to
false on all of my pools.
I can't really work out what this setting does though.
Can anyone please explain what this setting does and whether it would
improve my situation?
Thanks
J
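For reference, the flag can be inspected and changed per pool (the pool name is
a placeholder; note that re-enabling it causes data movement):
    ceph osd dump | grep '^pool'                  # pools with the flag show 'hashpspool' in their flags
    ceph osd pool set <poolname> hashpspool true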
On 11 November 2015 at 14
Hi, I'm having issues activating my OSDs. I have provided the output of the
fault. I can see that the error message has said that the connection is
timing out however, I am struggling to understand why as I have followed
each stage within the quick start guide. For example, I can ping node1
(which
Hi,
I have a Ceph cluster running on 0.80.10 and I'm having problems with the
data balancing on two new nodes that were recently added.
The cluster nodes look like as follows:
6x OSD servers with 32 4TB SAS drives. The drives are configured with
RAID0 in pairs, so 16 8TB OSDs per node.
New ser
og of the error
messages I receive and would appreciate any advice. I have noticed that it
continually logs a .fault message; it would be worth
noting that I have followed the QSG to a tee.
Thanks, James
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/
Hi David,
Thank you for your suggestion. Unfortunately I did not understand what
was involved, and in the process of trying to figure it out I think I
made it worse. Thankfully it's just a test environment, so I just
rebuilt all the Ceph servers involved and now it's working.
Rega
I upgraded to 0.94.4 yesterday and now radosgw will not run on any of
the servers. The service itself will run, but it's not listening (using
civetweb with port 80 specified). Run manually, I get the following output:
root@dbp-ceph01:~# /usr/bin/radosgw --cluster=ceph --id radosgw.gateway
-d
2015-
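For reference, the kind of ceph.conf stanza that puts radosgw on civetweb port
80 (the section name mirrors the --id above, but treat this as a sketch):
    [client.radosgw.gateway]
    rgw frontends = civetweb port=80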
I have an OSD that didn't come up after a reboot. I was getting the
error show below. it was running 0.94.3 so I reinstalled all packages.
I then upgraded everything to 0.94.4 hoping that would fix it but it
hasn't. There are three OSDs; this is the only one having problems (it
also contains th
Hi John,
Thanks for your explanations.
Actually, clients can. Clients can request fairly complex operations like
"read an xattr, stop if it's not there, now write the following discontinuous
regions of the file...". RADOS executes these transactions atomically.
[Ja
he cluster and starting from scratch since we have
backups of all of the data.
Thank you,
James Green
(401)-825-4401
From: Goncalo Borges [mailto:gonc...@physics.usyd.edu.au]
Sent: Thursday, October 15, 2015 3:32 AM
To: James Green ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph PGs stuc
Hello,
We recently had 2 nodes go down in our ceph cluster; one was repaired and the
other had all 12 OSDs destroyed when it went down. We brought everything back
online, and there were several PGs that were showing as down+peering as well as
down. After marking the failed OSDs as lost and removing
nDisk
regarding Ceph optimization for SSDs, including NVMe.
http://www.tomsitpro.com/articles/samsung-jbod-nvme-reference-system,1-2809.html
Regards,
James
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of J David
Sent: Wednesday, September 30, 2
Hi Quentin,
Samsung has different types of SSDs for different types of workloads, with
different SSD media like SLC, MLC, TLC, 3D NAND, etc. They were designed for
different workloads and different purposes. Thanks for your understanding and
support.
Regards,
James
From: ceph-users [mailto:ceph
; the Gateway was working fine for a bit, but then started
throwing this error.
Turned out the back-end mon and osd daemons had not been restarted, so they
were still running firefly code - if you're running on Ubuntu, the
packaging won't restart daemons automatically - it has to be done
manual
Hi Andrija,
Your feedback is greatly appreciated.
Regards,
James
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Friday, September 04, 2015 12:39 PM
To: James (Fei) Liu-SSI
Cc: Quentin Hartman; ceph-users
Subject: Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel
little bit strange over here.
Regards,
James
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Friday, September 04, 2015 12:21 PM
To: Quentin Hartman
Cc: James (Fei) Liu-SSI; ceph-users
Subject: Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel
s3700
Quentin,
try
weeks, after 3-4 months of being
in production (VMs/KVM/CloudStack)”
What you mean here is that you deploy Ceph with CloudStack, am I
correct? The 2 SSDs that vanished in 2~3 weeks were brand new Samsung 850 Pro
128GB, right?
Thanks,
James
From: Andrija Panic [mailto:andrija.pa...@gmail.com
Hi Quentin and Andrija,
Thanks so much for reporting the problems with Samsung.
Would it be possible to get to know the configuration of your system? What kind
of workload are you running? You use the Samsung SSDs as separate journaling
disks, right?
Thanks so much.
James
From: ceph-users
ne is using that default,
including me. Is there a reason that Straw is favored over Tree other than
it’s just the default? I’m interested in Tree because it looks like it
provides performance improvements over Straw with only a small reorganization
hit when OSDs fail etc…
Thanks,
Yup I think you are correct, I see this listed under issues
https://github.com/ketor/ceph-dokan/issues/5
On Thu, Apr 30, 2015 at 12:58 PM, Gregory Farnum wrote:
> On Thu, Apr 30, 2015 at 9:49 AM, James Devine wrote:
>> So I am trying to get ceph-dokan to work. Upon running it with
So I am trying to get ceph-dokan to work. Upon running it with
./ceph-dokan.exe -c ceph.conf -l e it indicates there was a mount
error and the monitor it connects to logs cephx server client.admin:
unexpected key: req.key=0 expected_key=d7901d515f6b0c61
According to the debug output attached ceph
e on firefly at this time).
trusty and utopic are firefly (0.80.9).
vivid is hammer (0.94.1) - this is also available for trusty via the
Kilo Cloud Archive - see [0].
[0] https://wiki.ubuntu.com/ServerTeam/CloudArchive
- --
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@
ning ;)'.
Nice spot - and this is not the first time I've seen a bug due to
incorrect specification of the stripe size for rbd images.
- --
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
Well, we’re a RedHat shop, so I’ll have to see what’s adaptable from there.
(Mint on all my home systems, so I’m not totally lost with Ubuntu )
From: Quentin Hartman [mailto:qhart...@direwolfdigital.com]
Sent: Thursday, March 26, 2015 1:15 PM
To: Steffen W Sørensen
Cc: LaBarre, James (CTR
For that matter, is there a way to build Calamari without going the whole
vagrant path at all? Some way of just building it through command-line tools?
I would be building it on an OpenStack instance, no GUI. Seems silly to have
to install an entire VirtualBox environment inside something tha
ceph-mon log: http://pastebin.com/ndaYLPYa
ceph-create-keys output: http://pastebin.com/wXT1U1wb
Does anybody have an idea what might be wrong here?
--
James Oakley
jf...@funktronics.ca
u in
our 15.01 charm release - mod-fastcgi was causing so many headaches!
So far in our internal QA cloud it's been very reliable running with
civetweb - we run three units of the ceph-radosgw charm fronted by
haproxy + VIP.
I'd +1 switching focus to this approach.
Cheers
James
- --
James
I'm running Ubuntu 14.04 servers with Firefly and I don't have a sysvinit
file, but I do have an upstart file.
"touch /var/lib/ceph/osd/ceph-XX/upstart" should be all you need to do.
That way, the OSDs should be mounted automatically on boot.
On 30 January 2015 at 10:25, Alexis KOALLA wrote:
>
COW operations are still troublesome
with Ceph? So using a RAID1 on each node with Btrfs will allow me
to turn off COW if/when those sorts of issues arise.
What I need help with right now is setting up the UUID-based /etc/fstab
and suggestions on exactly how to configure Ceph(FS).
My desire is to keep the Btrfs/Gentoo installs stable but to be able
to use Ansible or other (Ceph-based) tools to reconfigure Ceph or recover
from Ceph failures.
James
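As a sketch of the UUID-based fstab entry being asked about (the UUID and mount
point below are placeholders, not from this thread):
    # /etc/fstab -- mount a Btrfs OSD data disk by UUID
    UUID=0a1b2c3d-0000-0000-0000-000000000000  /var/lib/ceph/osd/ceph-0  btrfs  noatime  0  2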
p of Btrfs is the most challenging part of this journey so
far. I use OpenRC on Gentoo, and have no interest in systemd, just so
you know.
James
[1] https://amplab.cs.berkeley.edu/
[2] http://dune.mathematik.uni-freiburg.de/
[3] http://www.opengeosys.org/
[4] http://w
Hello,
I was wondering if anyone has Mesos running on top of Ceph?
I want to test/use Ceph in lieu of HDFS.
I'm working on Gentoo, but any experiences with Mesos on Ceph
are of keen interest to me as related to performance, stability
and any difficulties experienced.
op of it.
> -Greg
CockroachDB might be what you are looking for?
http://cockroachdb.org/
hth,
James
http://kernel.ubuntu.com/~kernel-ppa/mainline/
I'm running 3.17 on my trusty clients without issue
On Fri, Dec 5, 2014 at 9:37 AM, Antonio Messina wrote:
> On Fri, Dec 5, 2014 at 4:25 PM, Nick Fisk wrote:
> > This is probably due to the Kernel RBD client not being recent enough.
> Have
> > you