Re: [ceph-users] [ceph-calamari] Does anyone understand Calamari??

2015-05-13 Thread Steffen W Sørensen
> On 13/05/2015, at 11.23, Steffen W Sørensen wrote: > >> On 13/05/2015, at 04.08, Gregory Meno wrote: >> >> Ideally I would like everything in /var/log/calamari >> >> be sure to set calamari.conf like so:

Re: [ceph-users] [ceph-calamari] Does anyone understand Calamari??

2015-05-13 Thread Steffen W Sørensen
> On 13/05/2015, at 04.08, Gregory Meno wrote: > > Ideally I would like everything in /var/log/calamari > > be sure to set calamari.conf like so: > [shadow_man@vpm107 ~]$ grep DEBUG /etc/calamari/calamari.conf > log_level = DEBUG > db_log_level = DEBUG > log_level = DEBUG > > then restart cthu
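
For reference, a sketch of the settings that grep matches; the section names are assumptions based on a stock calamari.conf (the backend and the web UI each carry their own log_level), so verify against your own file:

    [cthulhu]
    log_level = DEBUG
    db_log_level = DEBUG

    [calamari_web]
    log_level = DEBUG

The quoted mail then goes on to restart the backend (presumably cthulhu) so the new level takes effect.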

Re: [ceph-users] New Calamari server

2015-05-13 Thread Steffen W Sørensen
> On 12/05/2015, at 19.51, Bruce McFarland wrote: > > I am having a similar issue. The cluster is up, salt is running on, and has > accepted keys from, all nodes, including the monitor. I can issue salt and > salt/ceph.py commands from the Calamari server, including 'salt \* > ceph.get_heartbeats

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Steffen W Sørensen
> On 05/05/2015, at 18.52, Sage Weil wrote: > > On Tue, 5 May 2015, Tony Harris wrote: >> So with this, will even numbers then be LTS? Since 9.0.0 is following >> 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, >> 12.x.x, etc. will be LTS... > > It looks that way n

[ceph-users] sparse RBD devices

2015-05-05 Thread Steffen W Sørensen
I’ve live migrated RBD images of our VMs (with ext4 FS) through our Proxmox PVE cluster from one pool to another, and now it seems those devices are no longer as sparse as before, i.e. pool usage has grown to almost the sum of the full image sizes; wondering if there’s a way to untrim RBD images to become m
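
One common way to get images sparse again is to discard unused blocks from inside the guest, assuming the virtual disk is attached with discard support (e.g. virtio-scsi with discard enabled in Proxmox); that is an assumption, not something confirmed in this thread:

    # Inside the guest (ext4): discard unused blocks so librbd can drop the
    # backing RADOS objects again. Requires the disk to be attached with
    # discard enabled; sketch only.
    fstrim -v /
    fstrim -v /data    # repeat for each mounted ext4 filesystem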

Re: [ceph-users] xfs corruption, data disaster!

2015-05-04 Thread Steffen W Sørensen
> On 04/05/2015, at 15.01, Yujian Peng wrote: > > Alexandre DERUMIER writes: > >> >> >> maybe this could help to repair pgs ? >> >> http://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/ >> >> (6 disk at the same time seem pretty strange. do you have some kind of >> writ

Re: [ceph-users] How to estimate whether putting a journal on SSD will help with performance?

2015-05-01 Thread Steffen W Sørensen
Also remember to drive your Ceph cluster as hard as you have the means to, e.g. by tuning the VM OSes/IO subsystems: use multiple RBD devices per VM (to issue more outstanding IOPs from the VM IO subsystem), the best IO scheduler, enough CPU power + memory per VM, and also ensure low network latency + bandwidth bet
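
As a sketch of that kind of guest-side tuning (device names are hypothetical, and noop is only a sensible choice when the hypervisor/Ceph layers below do the real scheduling):

    cat /sys/block/vdb/queue/scheduler           # which I/O scheduler is active
    echo noop > /sys/block/vdb/queue/scheduler   # let the layers below do the ordering
    blockdev --getra /dev/vdb                    # current read-ahead (sectors)
    blockdev --setra 4096 /dev/vdb               # raise it for streaming workloads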

Re: [ceph-users] Calamari server not working after upgrade 0.87-1 -> 0.94-1

2015-04-27 Thread Steffen W Sørensen
> On 27/04/2015, at 15.51, Alexandre DERUMIER wrote: > > Hi, can you check /var/log/salt/minion on your ceph node? > > I have had a similar problem, I needed to remove > > rm /etc/salt/pki/minion/minion_master.pub > /etc/init.d/salt-minion restart > > (I don't know if "calamari-ctl
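
The full reset sequence, sketched; the salt-key steps on the Calamari/salt master are an assumption beyond what the quoted mail shows:

    # On the Ceph node whose minion is stuck (from the quoted mail):
    rm /etc/salt/pki/minion/minion_master.pub
    /etc/init.d/salt-minion restart

    # On the Calamari/salt master (assumption, not from the quoted mail):
    salt-key -L                    # the node should reappear under "Unaccepted Keys"
    salt-key -a <node-fqdn>        # accept its key again
    salt \* ceph.get_heartbeats    # verify the minion answers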

[ceph-users] Calamari server not working after upgrade 0.87-1 -> 0.94-1

2015-04-27 Thread Steffen W Sørensen
All, After successfully upgrading from Giant to Hammer, at first our Calamari server seemed fine, showing the new 'too many PGs' warning. Then, during/after removing/consolidating various pools, it failed to get updated. Not having been able to find any root cause, I decided to flush the Postgres DB (calamari-ctl cle
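
The flush/re-initialize sequence hinted at there, as a sketch; the flag names come from a stock calamari-ctl of that era and may differ per version:

    calamari-ctl clear --yes-i-am-sure   # drop Calamari's Postgres state
    calamari-ctl initialize              # recreate the DB and admin user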

Re: [ceph-users] ceph-fuse unable to run through "screen" ?

2015-04-24 Thread Steffen W Sørensen
> On 23/04/2015, at 12.48, Burkhard Linke wrote: > > Hi, > > I had a similar problem during reboots. It was solved by adding '_netdev' to > the options for the fstab entry. Otherwise the system may try to mount the > cephfs mount point before the network is available. Didn’t know of the _n
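
A sketch of such an fstab entry for a ceph-fuse mount (the id and mount point are hypothetical); _netdev delays the mount until networking is up:

    # /etc/fstab
    id=admin,conf=/etc/ceph/ceph.conf  /mnt/cephfs  fuse.ceph  defaults,_netdev  0 0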

Re: [ceph-users] ceph-fuse unable to run through "screen" ?

2015-04-23 Thread Steffen W Sørensen
> On 23/04/2015, at 10.24, Florent B wrote: > > I come back with this problem because it persists even after upgrading > to Hammer. > > With CephFS, it does not work, and the only workaround I found does not > work 100% of the time: I also found issues at reboot, because starting Ceph fuse

Re: [ceph-users] Cephfs: proportion of data between data pool and metadata pool

2015-04-23 Thread Steffen W Sørensen
> But in the menu, the use case "cephfs only" doesn't exist and I have > no idea of the %data for each of the pools, metadata and data. So, what is > the proportion (approximately) of %data between the "data" pool and > the "metadata" pool of cephfs in a cephfs-only cluster? > > Is it rather metadata=20

Re: [ceph-users] Ceph Hammer question..

2015-04-23 Thread Steffen W Sørensen
> I have a cluster currently on Giant - is Hammer stable/ready for production > use? I assume so; upgraded a 0.87-1 to 0.94-1, and the only thing that came up was that Ceph will now warn if you have too many PGs (>300/OSD), which it turned out I and others had. So I had to do pool consolidation in order to a
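
For reference, the >300/OSD threshold comes from a mon option (default 300 in Hammer); a sketch of where it lives, though raising it only hides the warning, and consolidating pools as described is the real fix:

    # ceph.conf (sketch)
    [mon]
    mon pg warn max per osd = 300   # default; the "too many PGs" warning trips above this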

Re: [ceph-users] replace dead SSD journal

2015-04-18 Thread Steffen W Sørensen
> On 17/04/2015, at 21.07, Andrija Panic wrote: > > nah… Samsung 850 PRO 128GB - dead after 3 months - 2 of these died... wearing > level is 96%, so only 4% wasted... (yes I know these are not enterprise, etc… ) Damn… but maybe your surname says it all - Don’t Panic :) But making sure same type

Re: [ceph-users] replace dead SSD journal

2015-04-17 Thread Steffen W Sørensen
> I have 1 SSD that hosted 6 OSDs' journals, that is dead, so 6 OSDs are down and ceph > rebalanced etc. > > Now I have a new SSD inside, and I will partition it etc - but would like to > know how to proceed now with the journal recreation for those 6 OSDs that > are down now. Well, assuming the OSDs ar
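
For illustration, one commonly used sequence for pointing an existing-but-down OSD at a fresh journal partition (OSD id and partition are hypothetical); whether it is safe depends on whether the old journal could be flushed before the SSD died, so recreating the OSDs from scratch may be the safer path:

    service ceph stop osd.12                      # make sure the OSD is stopped
    ln -sf /dev/disk/by-partuuid/<new-part-uuid> \
           /var/lib/ceph/osd/ceph-12/journal      # point at the new partition
    ceph-osd -i 12 --mkjournal                    # write a fresh journal header
    service ceph start osd.12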

Re: [ceph-users] metadata management in case of ceph object storage and ceph block storage

2015-04-17 Thread Steffen W Sørensen
> On 17/04/2015, at 07.33, Josef Johansson wrote: > > To your question, which I’m not sure I understand completely. > > So yes, you don’t need the MDS if you just keep track of block storage and > object storage. (i.e. images for KVM) > > So the Mon keeps track of the metadata for the Pool an

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
> On 16/04/2015, at 11.09, Christian Balzer wrote: > > On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote: > >>> That later change would have _increased_ the number of recommended PG, >>> not decreased it. >> Weird as our Giant health stat

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
On 16/04/2015, at 01.48, Steffen W Sørensen wrote: > > Also our calamari web UI won't authenticate anymore, can’t see any issues in > any log under /var/log/calamari, any hints on what to look for are > appreciated, TIA! Well this morning it will authenticate me, but seems ca

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
> That later change would have _increased_ the number of recommended PG, not > decreased it. Weird as our Giant health status was ok before upgrading to Hammer… > With your cluster 2048 PGs total (all pools combined!) would be the sweet > spot, see: > > http://ceph.com/pgcalc/
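
The rule of thumb behind pgcalc, sketched; the numbers below are illustrative, not necessarily this cluster's:

    # total PGs across all pools combined ~= (OSDs * 100) / replica_size,
    # rounded up to a power of two, then split across pools by expected data share
    OSDS=24 SIZE=2
    echo $(( OSDS * 100 / SIZE ))   # 1200 -> round up to 2048 PGs total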

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-15 Thread Steffen W Sørensen
4.1-1~bpo70+1 amd64 Python libraries for the Ceph librbd library > On 16/04/2015, at 00.41, Steffen W Sørensen wrote: > > Hi, > > Successfully upgrade a small development 4x node Giant 0.87-1 cluster to > Hammer 0.94-1, each node with 6x OSD - 146GB, 19 p

[ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-15 Thread Steffen W Sørensen
Hi, Successfully upgraded a small development 4x node Giant 0.87-1 cluster to Hammer 0.94-1, each node with 6x OSD - 146GB, 19 pools, mainly 2 in use. Only minor thing: now ceph -s is complaining over too many PGs, where previously Giant had complained of too few, so various pools were bumped up till health

Re: [ceph-users] All client writes block when 2 of 3 OSDs down

2015-03-26 Thread Steffen W Sørensen
> On 26/03/2015, at 23.36, Somnath Roy wrote: > > Got most portion of it, thanks ! > But, still not able to get when second node is down why with single monitor > in the cluster client is not able to connect ? > 1 monitor can form a quorum and should be sufficient for a cluster to run. To have

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread Steffen W Sørensen
m one glance store over local file and upload back into a different glance back end store. Again this is probably better than dealing at a lower abstraction level and having to know its internal storage structures, and avoids what you’re pointing out, Greg. > > On Thu, Mar 26, 2015 at 3:05

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread Steffen W Sørensen
> On 26/03/2015, at 23.01, Gregory Farnum wrote: > > On Thu, Mar 26, 2015 at 2:53 PM, Steffen W Sørensen wrote: >> >>> On 26/03/2015, at 21.07, J-P Methot wrote: >>> >>> That's a great idea. I know I can set

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread Steffen W Sørensen
> On 26/03/2015, at 22.53, Steffen W Sørensen wrote: > >> >> On 26/03/2015, at 21.07, J-P Methot wrote: >> >> That's a great idea. I know I can setup cinder (the openstack volume >> manager) as a mult

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread Steffen W Sørensen
artition the list of objects into multiple concurrent loops, possibly from multiple boxes, as seems fit for the resources at hand: cpu, memory, network, ceph perf. /Steffen > > > > On 3/26/2015 3:54 PM, Steffen W Sørensen wrote: >>> On 26/03/2015, at 20.38, J-P Methot wrote: >
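
A minimal sketch of the per-object copy loop being discussed (pool names are hypothetical), split into a few parallel chunks as suggested:

    rados -p oldpool ls > objects.txt
    split -n l/4 objects.txt chunk.            # four roughly equal chunks
    for c in chunk.*; do
      ( while read -r obj; do
          rados -p oldpool get "$obj" - | rados -p newpool put "$obj" -
        done < "$c" ) &
    done
    wait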

Re: [ceph-users] Migrating objects from one pool to another?

2015-03-26 Thread Steffen W Sørensen
> On 26/03/2015, at 20.38, J-P Methot wrote: > > Lately I've been going back to work on one of my first ceph setups, and now I > see that I have created way too many placement groups for the pools on that > setup (about 10 000 too many). I believe this may impact performance > negatively, as th

Re: [ceph-users] Calamari Deployment

2015-03-26 Thread Steffen W Sørensen
> On 26/03/2015, at 17.18, LaBarre, James (CTR) A6IT wrote: > For that matter, is there a way to build Calamari without going the whole > vagrant path at all? Some way of just building it through command-line > tools? I would be building it on an Openstack instance, no GUI. Seems silly >

Re: [ceph-users] more human readable log to track request or using mapreduce for data statistics

2015-03-26 Thread Steffen W Sørensen
ceph developers to alter the ceph code base to complement your exact need when you still want to filter the output through grep or whatever anyway, IMHO :) > > 2015-03-26 16:38 GMT+08:00 Steffen W Sørensen : >> >> On 26/03/2015, at 09.05, 池信泽 wrote: >> >> hi,ceph: >&

Re: [ceph-users] more human readable log to track request or using mapreduce for data statistics

2015-03-26 Thread Steffen W Sørensen
On 26/03/2015, at 09.05, 池信泽 wrote: > hi,ceph: > > Currently, the command ”ceph --admin-daemon > /var/run/ceph/ceph-osd.0.asok dump_historic_ops“ may return as below: > > { "description": "osd_op(client.4436.1:11617 > rb.0.1153.6b8b4567.0192 [] 2.8eb4757c ondisk+write e92)", >
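
Since the reply above suggests post-processing the output rather than changing Ceph, a sketch with jq (available from EPEL); the JSON field names ("ops", "duration", "description") are assumptions that can differ between releases:

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops \
      | jq -r '.ops[] | "\(.duration)s  \(.description)"' \
      | sort -rn | head -20        # slowest ops first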

Re: [ceph-users] Mapping users to different rgw pools

2015-03-23 Thread Steffen W Sørensen
My vague understanding is that this is mapped through the zone associated with the specific user. So define your desired pools and the zones mapping to the pools, and assign users to the desired regions+zones and thus to different pools per user. > Den 13/03/2015 kl. 07.48 skrev Sreenath BH : > > Hi all,

Re: [ceph-users] Giant 0.87 update on CentOs 7

2015-03-22 Thread Steffen W Sørensen
> :) Now disabling epel, which seems to be the confusing repo above, just renders me > with timeouts from http://ceph.com … is Ceph.com > down currently? http://eu.ceph.com answers currently… probably the trans-Atlantic line or my provider :/ > >
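
If ceph.com keeps timing out, a sketch of pointing the repo at the eu.ceph.com mirror that does answer; the assumption that its paths mirror ceph.com's layout is worth verifying first:

    sed -i 's#http://ceph.com/#http://eu.ceph.com/#g' /etc/yum.repos.d/ceph.repo
    yum clean all
    yum update ceph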

Re: [ceph-users] Giant 0.87 update on CentOs 7

2015-03-22 Thread Steffen W Sørensen
> On 22/03/2015, at 22.28, Steffen W Sørensen wrote: > > All, > > Got a test cluster on a 4 node CentOs 7, which fails to pull updates, any > hints: > > > [root@n1 ~]# rpm -qa | grep -i ceph > python-ceph-0.87-0.el7.centos.x86_64 > ceph-release-1-0.el7.noa

[ceph-users] Giant 0.87 update on CentOs 7

2015-03-22 Thread Steffen W Sørensen
All, Got a test cluster on 4 CentOS 7 nodes, which fails to pull updates, any hints: [root@n1 ~]# rpm -qa | grep -i ceph python-ceph-0.87-0.el7.centos.x86_64 ceph-release-1-0.el7.noarch ceph-deploy-1.5.21-0.noarch ceph-common-0.87-0.el7.centos.x86_64 libcephfs1-0.87-0.el7.centos.x86_64 ceph-0.8

Re: [ceph-users] cciss driver package for RHEL7

2015-03-20 Thread Steffen W Sørensen
> generically recognize that controller out of the box. I know, got the same issue when utilizing old ProLiants for test/PoC with newer SW. Maybe we should try to use such old RAID ctlrs similar to this for OSD journaling and avoid wear issues as with SSDs :) /Steffen > > F

Re: [ceph-users] cciss driver package for RHEL7

2015-03-19 Thread Steffen W Sørensen
> On 19/03/2015, at 15.57, O'Reilly, Dan wrote: > > I understand there’s a KMOD_CCISS package available. However, I can’t find > it for download. Anybody have any ideas? Oh I believe HP swapped cciss for the hpsa (Smart Array) driver long ago… so maybe just download the latest cciss source and then c

Re: [ceph-users] Issue with Ceph mons starting up- leveldb store

2015-03-19 Thread Steffen W Sørensen
On 19/03/2015, at 15.50, Andrew Diller wrote: > We moved the data dir over (/var/lib/ceph/mon) from one of the good ones to > this 3rd node, but it won't start- we see this error, after which no further > logging occurs: > > 2015-03-19 06:25:05.395210 7fcb57f1c7c0 -1 failed to create new leveld

Re: [ceph-users] [SPAM] Changing pg_num => RBD VM down !

2015-03-16 Thread Steffen W Sørensen
On 16/03/2015, at 12.23, Alexandre DERUMIER wrote: >>> We use Proxmox, so I think it uses librbd ? > > As it's me that made the proxmox rbd plugin, I can confirm that yes, it's > librbd ;) > Is the ceph cluster on dedicated nodes? or are the vms running on the same nodes > as the osd daemons ? My c

Re: [ceph-users] [SPAM] Changing pg_num => RBD VM down !

2015-03-16 Thread Steffen W Sørensen
On 16/03/2015, at 11.14, Florent B wrote: > On 03/16/2015 11:03 AM, Alexandre DERUMIER wrote: >> This is strange, that could be: >> >> - qemu crash, maybe a bug in rbd block storage (if you use librbd) >> - oom-killer on you host (any logs ?) >> >> what is your qemu version ? >> > > Now, we

Re: [ceph-users] Add monitor unsuccesful

2015-03-12 Thread Steffen W Sørensen
> On 12/03/2015, at 20.00, Jesus Chavez (jeschave) wrote: > > That's what I thought and did actually; the monmap and keyring were copied to > the new monitor, and there with the 2 elements I did the mkfs thing and still get > that message. Do I need OSDs configured? Because I have none and I am not

Re: [ceph-users] Add monitor unsuccesful

2015-03-12 Thread Steffen W Sørensen
On 12/03/2015, at 03.08, Jesus Chavez (jeschave) wrote: > Thanks Steffen, I have followed everything, not sure what is going on; the mon > keyring and client admin, are they individual? Per mon host? Or do I need to copy > from the first initial mon node? I'm no expert, but I would assume the keyring could

Re: [ceph-users] Add monitor unsuccesful

2015-03-11 Thread Steffen W Sørensen
On 12/03/2015, at 00.55, Jesus Chavez (jeschave) wrote: > can anybody tell me a good blog link that explains how to add a monitor? I have > tried manually and also with ceph-deploy without success =( Dunno if these might help you: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#adding-
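
A condensed sketch of the manual procedure from that doc; the mon id "mon3" and its IP are hypothetical:

    mkdir -p /var/lib/ceph/mon/ceph-mon3
    ceph auth get mon. -o /tmp/mon.keyring          # existing mon. key
    ceph mon getmap -o /tmp/monmap                  # current monmap
    ceph-mon -i mon3 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add mon3 192.0.2.13:6789               # register it in the monmap
    ceph-mon -i mon3 --public-addr 192.0.2.13:6789  # then start it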

Re: [ceph-users] S3 RadosGW - Create bucket OP

2015-03-11 Thread Steffen W Sørensen
On 11/03/2015, at 08.19, Steffen W Sørensen wrote: > On 10/03/2015, at 23.31, Yehuda Sadeh-Weinraub wrote: > >>>> What kind of application is that? >>> Commercial Email platform from Openwave.com >> >> Maybe it could be worked around using an apache

Re: [ceph-users] Duplication name Container

2015-03-11 Thread Steffen W Sørensen
On 11/03/2015, at 15.31, Wido den Hollander wrote: > On 03/11/2015 03:23 PM, Jimmy Goffaux wrote: >> Hello All, >> >> I use Ceph in production for several months. but i have an errors with >> Ceph Rados Gateway for multiple users. >> >> I am faced with the following error: >> >> Error trying t

Re: [ceph-users] S3 RadosGW - Create bucket OP

2015-03-11 Thread Steffen W Sørensen
On 10/03/2015, at 23.31, Yehuda Sadeh-Weinraub wrote: >>> What kind of application is that? >> Commercial Email platform from Openwave.com > > Maybe it could be worked around using an apache rewrite rule. In any case, I > opened issue #11091. Okay, how, by rewriting the response? Thanks, where

Re: [ceph-users] EC Pool and Cache Tier Tuning

2015-03-10 Thread Steffen W Sørensen
On 09/03/2015, at 22.44, Nick Fisk wrote: > Either option #1 or #2 depending on if your data has hot spots or you need > to use EC pools. I'm finding that the cache tier can actually slow stuff > down depending on how much data is in the cache tier vs on the slower tier. > > Writes will be about

Re: [ceph-users] tgt and krbd

2015-03-07 Thread Steffen W Sørensen
On 06/03/2015, at 22.47, Jake Young wrote: > > I wish there was a way to incorporate a local cache device into tgt with > > librbd backends. > What about a ram disk device like rapid disk+cache in front of your rbd block > device > > http://www.rapiddisk.org/?page_id=15#rapiddisk > > /Steffe

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Steffen W Sørensen
On 06/03/2015, at 16.50, Jake Young wrote: > > After seeing your results, I've been considering experimenting with that. > Currently, my iSCSI proxy nodes are VMs. > > I would like to build a few dedicated servers with fast SSDs or fusion-io > devices. It depends on my budget, it's hard t

Re: [ceph-users] S3 RadosGW - Create bucket OP

2015-03-06 Thread Steffen W Sørensen
On 06/03/2015, at 12.24, Steffen W Sørensen wrote: > 3. What are the BCPs for maintaining GW pools, do I need to run something like GC / > cleanup OPs / log object pruning etc., any pointers to docs on this? Is this all the maintenance one should consider on pools for a GW instance? http://ceph.co

[ceph-users] S3 RadosGW - Create bucket OP

2015-03-06 Thread Steffen W Sørensen
Hi > Check the S3 Bucket OPS at : http://ceph.com/docs/master/radosgw/s3/bucketops/ I've read that as well, but I'm having other issues getting an App to run against our Ceph S3 GW, maybe you have a few hints on this as well... Got the cluster working for rbd+cephFS and have initially verified the

Re: [ceph-users] Mail not reaching the list?

2015-03-01 Thread Steffen W Sørensen
On 01/03/2015, at 06.03, Sudarshan Pathak wrote: > Mail is landed in Spam. > > Here is message from google: > Why is this message in Spam? It has a from address in yahoo.com but has > failed yahoo.com's required tests for authentication. Learn more Maybe Tony didn't send through a Yahoo SMTP

Re: [ceph-users] too few pgs in cache tier

2015-02-27 Thread Steffen W Sørensen
On 27/02/2015, at 17.04, Udo Lembke wrote: > ceph health detail > HEALTH_WARN pool ssd-archiv has too few pgs Slightly different: I had an issue with my Ceph cluster underneath a PVE cluster yesterday. Had two Ceph pools for RBD virt disks, vm_images (boot hdd images) + rbd_data (extra hdd imag
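
For the "too few pgs" warning itself, the usual cure is to raise pg_num and then pgp_num on the pool; a sketch, where the target value is hypothetical, picked per pgcalc, and best grown in modest steps on a busy cluster:

    ceph osd pool get ssd-archiv pg_num        # current value
    ceph osd pool set ssd-archiv pg_num 256
    ceph osd pool set ssd-archiv pgp_num 256   # must follow pg_num for data to rebalance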

Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
On 27/02/2015, at 19.02, Steffen W Sørensen wrote: > Into which pool does such user data (buckets and objects) get stored, and can one possibly direct user data into a dedicated pool? > > [root@rgw ~]# rados df > pool name category KB objec

Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
On 27/02/2015, at 18.51, Steffen W Sørensen wrote: >> rgw enable apis = s3 > Commenting this out makes it work :) Thanks for helping on this initial issue! > [root@rgw tests3]# ./lsbuckets.py > [root@rgw tests3]# ./lsbuckets.py > my-new-bucket 2015-02-27T17:49:0

Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
> rgw enable apis = s3 Commenting this out makes it work :) [root@rgw tests3]# ./lsbuckets.py [root@rgw tests3]# ./lsbuckets.py my-new-bucket 2015-02-27T17:49:04.000Z [root@rgw tests3]# ... 2015-02-27 18:49:22.601578 7f48f2bdd700 20 rgw_create_bucket returned ret=-17 bucket=my-new-b
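
For reference, a sketch of the ceph.conf line in question (the section name is an assumption); leaving it unset falls back to the default where all the RGW APIs are enabled, which is what coincided with the test working here:

    [client.radosgw.gateway]
    # rgw enable apis = s3     <- commented out, as in the mail above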

Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
> That's the old way of defining pools. The new way involves defining a zone > and placement targets for that zone. Then you can have different default > placement targets for different users. Any URLs/pointers to better understand such matters? > Do you have any special config in your ceph.co

Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
On 27/02/2015, at 17.20, Yehuda Sadeh-Weinraub wrote: > I'd look at two things first. One is the '{fqdn}' string, which I'm not sure > whether that's the actual string that you have, or whether you just replaced > it for the sake of anonymity. The second is the port number, which should be > f

[ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
Sorry forgot to send to the list... Begin forwarded message: > From: Steffen W Sørensen > Subject: Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed > Date: 27. feb. 2015 18.29.51 CET > To: Yehuda Sadeh-Weinraub > > >> It seems that your request did fin

[ceph-users] Minor flaw in /etc/init.d/ceph-radsgw script

2015-02-27 Thread Steffen W Sørensen
Hi Seems there's a minor flaw in the CentOS/RHEL init script: line 91 reads: daemon --user="$user" "$RADOSGW -n $name" should IMHO be: daemon --user="$user" $RADOSGW -n $name to avoid the /etc/rc.d/init.d/functions:__pids_var_run line 151 complaint in dirname :) /Steffe
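
The proposed change, sketched as a before/after of that line in /etc/init.d/ceph-radosgw; quoting the whole command as one word is what ends up being fed to dirname inside __pids_var_run:

    # before (line 91):
    daemon --user="$user" "$RADOSGW -n $name"
    # after:
    daemon --user="$user" $RADOSGW -n $name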

[ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed

2015-02-27 Thread Steffen W Sørensen
Hi, Newbie to RadosGW+Ceph, but learning... Got a running Ceph cluster working with rbd+CephFS clients. Now I'm trying to verify the RadosGW S3 API, but seem to have an issue with RadosGW access. I get this error (haven't found anything searching so far...): S3ResponseError: 405 Method Not Allowed w