I know nothing about Mezzanine, but if you have access to the view
functions, then perhaps you can use a construct along the lines of:

    from django.http import HttpResponseNotFound

    def view_function(request):
        if request.user.is_superuser:
            # Or perhaps render the view treating all data as unsafe...
            return HttpResponseNotFound()
        else:
            # render the page as previously...
            ...

This could be extended to prevent staff users from being subverted in
the same way.  Perhaps a decorator could be used instead (e.g.
@unsafe_for_superuser, @unsafe_for_staff), along the lines of the
sketch below.
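
To make that concrete, such a decorator might look something like this
(unsafe_for_superuser is just an illustrative name; nothing like it
ships with Django or Mezzanine):

    from functools import wraps

    from django.http import HttpResponseNotFound

    def unsafe_for_superuser(view_func):
        # Hypothetical decorator: refuse to serve this view to superusers,
        # since the page may contain admin-authored markup.
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            if request.user.is_superuser:
                return HttpResponseNotFound()
            return view_func(request, *args, **kwargs)
        return wrapper

    @unsafe_for_superuser
    def view_function(request):
        ...  # render the page as previously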

This modifies "don't ever trust user content" to "don't trust admin
content if you're a superuser".

John

On 12/05/12 19:45, Nikolas Stevenson-Molnar wrote:
> The issue here is that Josh wants to allow certain users (admins) to
> create content with <script> tags, but ensure that said users can't
> use JavaScript to gain superuser status or otherwise monkey with
> things they shouldn't. So while the "don't trust user content"
> approach is a good default, I don't think it applies in this case. And
> while this may not be cross site, per se, it is still request forgery.
>
> _Nik
>
> On 5/11/2012 7:13 PM, Russell Keith-Magee wrote:
>> On Sat, May 12, 2012 at 5:11 AM, Josh Cartmell <joshcar...@gmail.com> wrote:
>>> I work a lot with Mezzanine, which is a CMS that uses Django.  A
>>> security issue was recently revealed where an admin user, let's call
>>> him A (admins can post rich content), could put cleverly constructed
>>> JavaScript on a page such that if a superuser, let's call her B, then
>>> visited the page, it would elevate A to superuser status (a more
>>> thorough explanation is here:
>>> http://groups.google.com/group/mezzanine-users/browse_thread/thread/14fde9d8bc71555b/8208a128dbe314e8?lnk=gst&q=security).
>>> Apparently any Django app which allows admin users to post arbitrary
>>> HTML would be vulnerable.
>>>
>>> My first thought was that CSRF protection should prevent this, but alas
>>> that is not the case.  The only real solution found is to restrict
>>> admin users from posting any JavaScript in their content, unless you
>>> completely trust the admin users.
>> This isn't a CSRF issue. CSRF stands for Cross Site Request Forgery. A
>> CSRF attack is characterised by:
>>
>>  * A user U on site S, who has credentials for the site S, and is logged in.
>>
>>  * An attacking site X that is visited by U.
>>
>>  * Site X submits a form (by POST or GET) directly to site S; because
>> U is logged in on S, the post is accepted as if it came from U
>> directly.
>>
>> CSRF protection ensures that site X can't submit the form on the
>> behalf of U - the CSRF token isn't visible to the attacker site, so
>> they can't provide a token that will allow their submission to be
>> accepted.
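>>
>> For concreteness, the standard protection is the CsrfViewMiddleware
>> plus the hidden token that {% csrf_token %} renders into each form:
>>
>>   <form method="post" action="/some/view/">
>>     {% csrf_token %}
>>     <input type="text" name="comment">
>>     <input type="submit" value="Post">
>>   </form>
>>
>> The attacking site can't read that token, so a forged POST from site X
>> is rejected before the view ever runs.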
>>
>> What you're referring to is an injection attack. An injection attack
>> occurs whenever user content is accepted and trusted on face value;
>> the attack occurs when that content is then rendered.
>>
>> The canonical example of an injection is "Little Bobby Tables":
>> http://xkcd.com/327/
>>
>> However, the injected content isn't just SQL; all sorts of content can
>> be injected for an attack. In this case, you're talking about A
>> injecting JavaScript onto a page viewed by B; when B views the page,
>> the JavaScript will be executed with B's permissions, allowing A to
>> modify the site as if they were B.
>>
>> Django already has many forms of protection against injection attacks.
>> In this case, the protection comes by way of Django's template engine
>> autoescaping its output by default. If you have a template:
>>
>> {{ content }}
>>
>> and context (possibly extracted from the database):
>>
>> <script>alert('hello')</script>
>>
>> Django will render this as:
>>
>> &lt;script&gt;alert('hello')&lt;/script&gt;
>>
>> which will be interpreted as text, not as a script tag injected into your 
>> page.
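>>
>> You can see essentially the same escaping at work directly (escape() is
>> what the template engine applies when autoescaping):
>>
>>   from django.utils.html import escape
>>
>>   # Renders the tag as inert text; quotes are escaped as well (the
>>   # exact entity used for the quote varies between Django versions).
>>   print(escape("<script>alert('hello')</script>"))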
>>
>> That said, the protection can be turned off. If you modify the template to 
>> read:
>>
>> {{ content|safe }}
>>
>> or
>>
>> {% autoescape off %}
>> {{ content }}
>> {% endautoescape %}
>>
>> or you mark the incoming string with mark_safe() before putting it in
>> the template context, then the content will be rendered verbatim --
>> which means that the JavaScript will be executed.
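>>
>> From Python code, that last one looks like this:
>>
>>   from django.utils.safestring import mark_safe
>>
>>   # mark_safe() wraps the string so the template engine will skip
>>   # escaping it -- i.e., you are promising the content is harmless.
>>   content = mark_safe("<script>alert('hello')</script>")
>>   # {{ content }} now renders the tag verbatim, and the browser runs it.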
>>
>> I'm not intimately familiar with Mezzanine or DjangoCMS, but based on
>> the nature of those tools (i.e., tools for building end-visible
>> content), I'm guessing they've marked content as safe specifically so
>> that end users can easily configure their CMS sites by putting HTML
>> into a field somewhere on the site. The side effect is that they're
>> implicitly saying that *all* user-supplied content is safe, which
>> provides the channel by which an attacker can do his/her thing.
>>
>> The lesson from this? Even when you think you can trust a user's
>> content, you can't trust a user's content :-)
>>
>>> My question is: are there any other solutions to these sorts of
>>> problems?  It seems like allowing an admin user to post JavaScript is
>>> reasonable; what is unreasonable is for that JavaScript to be able to
>>> elevate a user's privilege.  Could improvements be made to the CSRF
>>> mechanism to prevent this sort of user privilege elevation?
>> As I've indicated, there is a solution, and Django already implements
>> it. It involves escaping content, and has nothing to do with CSRF.
>>
>> In the case of Mezzanine, they've fixed the problem by implementing a
>> 'cleansing' process - i.e., still accepting the content as 'safe', but
>> post-processing it to make sure that it really *is* safe, by stripping
>> out <script> tags or anything else that might provide an injection
>> channel.
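>>
>> I don't know exactly how Mezzanine implements that cleansing, but a
>> whitelist-based sanitiser such as the third-party bleach library works
>> along these lines (the whitelist here is purely illustrative):
>>
>>   import bleach  # third-party package: pip install bleach
>>
>>   ALLOWED_TAGS = ["a", "p", "em", "strong", "ul", "ol", "li"]
>>   ALLOWED_ATTRIBUTES = {"a": ["href", "title"]}
>>
>>   # Strips the disallowed <script> tags and the onclick attribute,
>>   # keeping only whitelisted tags and attributes.
>>   cleaned = bleach.clean(
>>       '<a href="/x" onclick="alert(1)">click</a><script>boom()</script>',
>>       tags=ALLOWED_TAGS,
>>       attributes=ALLOWED_ATTRIBUTES,
>>       strip=True,
>>   )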
>>
>> While I can fully understand why Stephen has taken this approach for
>> Mezzanine, I'm not convinced it's a good answer in the general case.
>> CMS solutions are an edge case -- they actually *want* to accept HTML
>> content from the end user, so that it can be rendered.
>>
>> The problem with cleansing is that it doesn't fix the problem -- it
>> just narrows the attack window. OK, so let's say your cleanser removes
>> <script> tags; that's fixed one obvious way to inject. But what about
>> <a href="…" onclick="alert('hello')">? That's still JavaScript content
>> that could be used for an attack; your attacker just needs to socially
>> engineer the user into clicking on the link. So, you update your cleanser
>> to strip onclick attributes -- at which point, the attacker finds a
>> new way to inject, or they find a bug in your cleansing library, or
>> they find the one input field on your site that you accidentally
>> forgot to cleanse…  you're now engaged in an arms race with your
>> attackers.
>>
>> The default Django position of "don't *ever* trust user content" is
>> ultimately the safest approach, which is why Django implements it.
>> Django does provide a way to disable that protection, but it really
>> should be done as a last resort.
>>
>> That said -- we're always open to suggestions. If anyone has any good
>> ideas for preventing injection attacks (or any other type of attack,
>> for that matter), let us know. You can't have enough out-of-the-box
>> security.
>>
>> Yours,
>> Russ Magee %-)
>>
