Re: Changing the representation of rectangles in platform code

2017-02-09 Thread Jeff Muizelaar
It's not very easy to reason about overflow issues with our current
representation. This means that we currently just pretend that they
don't happen. The idea for changing the representation came up in
response to a security bug where we didn't really have a better
solution.

Changing to x1, x2, y1, y2 will allow us to match the pixman rectangle
representation that we use for regions. The region code shows up quite a
bit in profiles, so even a small improvement in this area is nice to have.

-Jeff

On Wed, Feb 8, 2017 at 9:19 PM, David Major  wrote:
> Is there a specific problem that's being solved by this proposal? It would
> be helpful to make this a bit more concrete, like "these benchmarks go x%
> faster", or "here's a list of overflow bugs that will just vanish", or
> "here's some upcoming work that this would facilitate".
>
> On Thu, Feb 9, 2017 at 1:56 PM, Botond Ballo  wrote:
>>
>> Hi everyone!
>>
>> I would like to propose changing the internal representation of
>> rectangles in platform code.
>>
>> We currently represent a rectangle by storing the coordinates of its
>> top-left corner, its width, and its height. I'll refer to this
>> representation as "x/y/w/h".
>>
>> I would like to propose storing instead the coordinates of the
>> top-left corner, and the coordinates of the bottom-right corner. I'll
>> refer to this representation as "x1/y1/x2/y2".
>>
>> The x1/y1/x2/y2 representation has several advantages over x/y/w/h:
>>
>>   - Several operations are more efficient with x1/y1/x2/y2, including
>>     intersection, union, and point-in-rect.
>>   - The representation is more symmetric, since it stores two quantities
>>     of the same kind (two points) rather than a point and a dimension
>>     (width/height).
>>   - The representation is less susceptible to overflow. With x/y/w/h,
>>     computation of x2/y2 can overflow for a large range of values of
>>     x/y and w/h. However, with x1/y1/x2/y2, computation of w/h cannot
>>     overflow if the coordinates are signed and the resulting w/h is
>>     unsigned.
>>
>> A known disadvantage of x1/y1/x2/y2 is that translating the rectangle
>> requires translating both points, whereas translating x/y/w/h only
>> requires translating one point. I think this disadvantage is minor in
>> comparison to the above advantages.
>>
>> The proposed change would affect the class mozilla::gfx::BaseRect, and
>> classes that derive from it (such as CSSRect, LayoutRect, etc., and,
>> notably, nsRect and nsIntRect), but NOT other rectangle classes like
>> DOMRect.
>>
>> I would like to propose making the transition as follows:
>>
>>   - Replace direct accesses to the 'width' and 'height' fields throughout
>> the codebase with calls to getter and setter methods. (There aren't
>> that many - on the order of a few dozen, last I checked.)
>>
>>   - Make the representation change, which is non-breaking now that
>> the direct accesses to 'width' and 'height' have been removed.
>>
>>   - Examine remaining calls to the getters and setters for width and
>> height and see if any can be better expressed using operations
>> on the points instead.
>>
>> The Graphics team, which owns this code, is supportive of this change.
>> However, since this is a fundamental utility type that's used by a
>> variety of platform code, I would like to ask the wider platform
>> development community for feedback before proceeding. Please let me
>> know if you have any!
>>
>> Thanks,
>> Botond
>>
>> [1]
>> http://searchfox.org/mozilla-central/rev/672c83ed65da286b68be1d02799c35fdd14d0134/gfx/2d/BaseRect.h#46
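The advantages Botond lists can be illustrated with a minimal sketch. This is a hypothetical standalone type for illustration only, not the actual mozilla::gfx::BaseRect code; the name RectEdges is invented:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical x1/y1/x2/y2 rectangle, for illustration only.
struct RectEdges {
  int32_t x1, y1, x2, y2;  // top-left and bottom-right corners

  bool IsEmpty() const { return x2 <= x1 || y2 <= y1; }

  // Intersection is just a min/max per edge; no width/height arithmetic,
  // so no overflow to reason about.
  RectEdges Intersect(const RectEdges& o) const {
    return {std::max(x1, o.x1), std::max(y1, o.y1),
            std::min(x2, o.x2), std::min(y2, o.y2)};
  }

  // Point-in-rect is four comparisons on a half-open range.
  bool Contains(int32_t px, int32_t py) const {
    return px >= x1 && px < x2 && py >= y1 && py < y2;
  }

  // With x1/y1/x2/y2, the width x2 - x1 of a non-empty rect always fits
  // in a uint32_t, whereas with x/y/w/h the sum x + w can overflow.
  uint32_t Width() const {
    return static_cast<uint32_t>(static_cast<int64_t>(x2) - x1);
  }
};
```

Even the worst case, a rect spanning the whole int32 range, yields a well-defined unsigned width.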
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>
>


Re: Intent to implement: Frames timing functions

2017-02-23 Thread Jeff Muizelaar
The linked bug suggests that Chrome implements this but this email suggests
it doesn't. What's the truth?

-Jeff

On Thu, Feb 23, 2017 at 2:45 AM, Boris Chiou  wrote:

> *Summary*:
> A frames timing function is a type of timing function that divides the
> input time into a specified number of intervals of equal length, each of
> which is associated with an output progress value of increasing value. The
> difference between a frames timing function and a step timing function is
> that a frames timing function returns the output progress value 0 and 1 for
> an equal portion of the input progress value in the range [0, 1]. This
> makes it suitable for use in animation loops where the animation should
> display the first and last frames of the animation for the same amount
> of time as each other frame during each loop.
>
> *Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1248340
>
> *Link to standard*: FPWD:
> https://www.w3.org/TR/css-timing-1/#frames-timing-functions
>
> *Platform coverage*: All platforms.
>
> *Estimated or target release*: Not yet determined.
>
> *Preference behind which this will be implemented*: I'm not sure. I think
> we don't need it because it is just a variant of the step timing function,
> and so it is safe to turn it on. If there are any other concerns, I can
> add a preference for this.
>
> *DevTools bug*: Not sure.
>
> *Do other browser engines implement this?* No
>
> *Tests* - web-platform/tests/timing-functions/frames-timing-functions
>
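The frames() behavior Boris describes can be sketched using the formula from the css-timing-1 FPWD. This is an illustration of the spec's definition, not Gecko's actual implementation:

```cpp
#include <cmath>

// frames(n): divide input progress [0, 1] into n equal intervals; the
// output values 0 and 1 each get a full interval, unlike steps().
double FramesTiming(double input, int frames) {  // requires frames >= 2
  double output = std::floor(input * frames) / (frames - 1);
  if (input <= 1.0 && output > 1.0) {
    output = 1.0;  // the last interval, including input == 1, maps to 1
  }
  return output;
}
```

With frames(2), inputs in [0, 0.5) map to 0 and [0.5, 1] map to 1, so the first and last frames are shown for equal time; steps(2, end) would instead produce output 1 only at input == 1.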


Re: Intent to ship: CSS 'transform-box' property

2017-03-01 Thread Jeff Muizelaar
What is the status of this property in other browsers?

-Jeff

On Wed, Mar 1, 2017 at 4:25 PM, Jonathan Watt  wrote:

> In bug 1208550[1] we plan to allow the 'transform-box' property[2] to ride
> the
> trains to release.
>
> Summary: This property solves a common SVG authoring request - allowing
> transforms in SVG to be relative to an element's bounds (to rotate around
> its
> center, for example) - but in a way more consistent than Chrome/Safari's
> current, sometimes confusing behavior.
>
> When considering 'transform' and 'transform-origin' authors may want to
> change:
>
>  1) what percentage values in translations in 'transform' and what
> percentage values in 'transform-origin' resolve against
>
>  2) what 'transform-origin' is relative to
>
> Right now Blink/Webkit resolve percentage values in
> 'transform'/'transform-origin' in SVG against the element's bounding box.
> What
> 'transform-origin' is relative to depends on what type of value the author
> specified. Percentage values in 'transform-origin' specify a position
> relative
> to the top left corner of the element, whereas absolute values specify a
> point
> relative to the origin of the element's current user space (necessary for
> backwards compatibility with most SVG). In other words 'transform-origin'
> is a
> little bit magical and unfortunately |transform-origin: 0% 0%| and
> |transform-origin: 0 0| will usually specify completely different points.
> The
> idea behind this behavior is that it should just "do the right thing" most
> of
> the time, but it can trip authors up.
>
> Mozilla doesn't have this problem since it always resolves percentage
> values in
> SVG against the nearest SVG viewport's dimensions, and 'transform-origin'
> is
> always relative to the origin of the element's current user space.
>
> Edge doesn't yet implement percentage values in SVG.
>
> Blink/Webkit and Mozilla's different approaches address different
> authoring use
> cases. The former's approach allows authors to transform relative to the
> bounds
> of an element in SVG, similar to a user's expectations if they're used to
> CSS
> transforms in HTML. Mozilla's approach allows objects to be laid out
> proportionally to the viewport and avoids the percentage/non-percentage
> 'transform-origin' gotcha.
>
> The new 'transform-box' property will allow content authors to have
> transforms
> behave in whichever of the ways meets their needs, and eliminate the
> magical
> percentage/non-percentage 'transform-origin' behavior. One thing to note
> is that
> what percentages resolve against and what 'transform-origin' is relative
> to is
> tied together. Either percentage values resolve against the bounds of the
> element and 'transform-origin' is relative to the top-left of the element's
> bounds, or else percentage values resolve against the nearest SVG viewport
> and
> 'transform-origin' is relative to the origin of the element's current user
> space.
>
> -Jonathan
>
> 1. https://bugzilla.mozilla.org/show_bug.cgi?id=1208550
> 2. http://dev.w3.org/csswg/css-transforms/#propdef-transform-box
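As an illustration of the two behaviors described above, here is roughly how an author might use the property once it ships (a sketch based on the css-transforms draft; the exact value names are defined there):

```css
/* Rotate an SVG element around its own center, with percentages
   resolving against the element's bounding box: */
.spin {
  transform-box: fill-box;
  transform-origin: 50% 50%;
  transform: rotate(45deg);
}

/* Opt into viewport-relative resolution explicitly, matching Gecko's
   historical behavior: */
.viewport-relative {
  transform-box: view-box;
  transform: translate(10%, 10%);
}
```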
>


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Jeff Muizelaar
Perhaps you need a faster computer (or several). Are you building on Windows?
With icecream on Linux I can do a full clobber build in ~5 minutes.

-Jeff

On Tue, Mar 7, 2017 at 1:59 PM,   wrote:
> I'm on Linux (Arch), with ccache, and I work on mozilla-central, rebasing my 
> bookmarks on top of central every couple days.
>
> And every couple days the recompilation takes 50-65 minutes.
>
> Here's my mozconfig:
> ▶ cat mozconfig
> mk_add_options MOZ_MAKE_FLAGS="-j4"
> mk_add_options AUTOCLOBBER=1
> ac_add_options --with-ccache=/usr/bin/ccache
> ac_add_options --enable-optimize="-g -Og"
> ac_add_options --enable-debug-symbols
> ac_add_options --enable-debug
>
> Here's my ccache:
> ▶ ccache -s
> cache directory /home/zbraniecki/.ccache
> primary config  /home/zbraniecki/.ccache/ccache.conf
> secondary config (readonly)  /etc/ccache.conf
> cache hit (direct) 23811
> cache hit (preprocessed)3449
> cache miss 25352
> cache hit rate 51.81 %
> called for link 2081
> called for preprocessing 495
> compile failed   388
> preprocessor error   546
> bad compiler arguments 8
> autoconf compile/link   1242
> no input file169
> cleanups performed42
> files in cache 36965
> cache size  20.0 GB
> max cache size  21.5 GB
>
> And all I do is pull -u central, and `./mach build`.
>
> Today I updated from Sunday, it's two days of changes, and my recompilation 
> is taking 60 minutes already.
>
> I'd like to hope that there's some bug in my configuration rather than the 
> nature of things.
>
> Would appreciate any leads,
> zb.


Re: Is there a way to improve partial compilation times?

2017-03-07 Thread Jeff Muizelaar
On Tue, Mar 7, 2017 at 2:29 PM,   wrote:
> So,
>
> I'm on a Dell XPS 13 (9350), and I don't think that toying with
> MOZ_MAKE_FLAGS will help me here. "-j4" seems a bit high and slows my
> work down a bit while the compilation is going on, but it's bearable.
>
> I was just wondering if really two days of patches landing in Gecko should 
> result in what seems like basically a full rebuild.

Two days of patches landing requiring basically a full rebuild is not
surprising to me. All it takes is a change to a few frequently included
headers, and then basically everything needs to be rebuilt.

-Jeff


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Jeff Muizelaar
On Thu, Mar 9, 2017 at 5:43 PM, Ben Kelly  wrote:
> Personally I prefer looking at the bug for the full context and single
> point of truth.  Also, security bugs typically can't have extensive commit
> messages and moving a lot of context to commit messages might paint a
> target on security patches.

The bug being inaccessible already makes collecting a list of security
patches trivial.

-Jeff


Re: Future of out-of-tree spell checkers?

2017-03-22 Thread Jeff Muizelaar
On Wed, Mar 22, 2017 at 11:08 AM, Henri Sivonen  wrote:
>
> dlopening libvoikko, if installed, and having thin C++ glue code
> in-tree seems much simpler, except maybe for sandboxing. What are the
> sandboxing implications of dlopening a shared library that will want
> to load its data files?

My understanding is that the spell checker mostly lives in the chrome
process, so it seems sandboxing won't be a problem.

-Jeff


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Muizelaar
I have a Ryzen 7 1800X and it does a Windows clobber build in ~20 min
(3 min of that is configure, which seems higher than what I've seen on
other machines). This compares pretty favorably to the Lenovo P710
machines that people are getting, which do 18 min clobber builds and
cost more than twice the price.

-Jeff

On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:
> They're basically out of stock now, but if you can find them, old
> refurbished 2x Intel Xeon E5-2670 (2.6GHz eight-core) machines were
> bottoming out under $1000/ea. Such a machine happily does GCC builds
> in 8 minutes, and I have clang builds down to 5.5. As the v2s leave
> warranty, similar machines may hit the market again.
>
> I'm interested to find out how the new Ryzen chips do. It should fit
> their niche well. I have one at home now, so I'll test when I get a
> chance.
>
> On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
>  wrote:
>> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
>>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>>
>>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>>> >
>>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>>> >
>>> > 24s here. So faster link times and significantly faster clobber times. I'm
>>> > sold!
>>> >
>>> > Any motherboard recommendations? If we want developers to use machines
>>> > like this, maintaining a current config in ServiceNow would probably
>>> > help.
>>>
>>>
>>> Until the ServiceNow catalog is updated...
>>>
>>> The Lenovo ThinkStation P710 is a good starting point (
>>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>>> From the default config:
>>>
>>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
>>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>>
>>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>>> (plus/minus a few hundred depending on configuration specific).
>>>
>>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>>> is absurd (costs >$2000 more when fully configured). Lenovo's
>>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>>> somewhere will sell the same thing for cheaper.
>>>
>>> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
>>> got the
>>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>>> 512 GB SSD, the price was very reasonable compared to similar
>>> configurations at Dell, HP, others.
>>>
>>> The just-released Broadwell-E processors with 6-10 cores are also nice
>>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>>> have no links to share. They should be <$2600 fully configured. That's a
>>> good price point between the i7-6700K and a dual socket Xeon. Although if
>>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>>> you have access to more cores in an office or a remote machine).
>>
>>  The other week I built a machine with a 6800K, 32 GB of RAM, and a
>>  2 TB HDD for $1525 CAD, so probably just under $1000 USD.  With just
>>  that machine I can do a 10-minute Linux debug build.  For less than
>>  the price of the Xeon machine quoted above I can buy 4 of those
>>  machines, which I expect would produce build times under 5:00.
>>
>> I believe with 32 GB of RAM there's enough fs cache that disk
>> performance doesn't actually matter, but it might be worth
>> investigating moving an SSD to that machine at some point.
>>
>> So I would tend to conclude Xeons are not a great deal unless you really
>> need to build for windows a lot before someone gets icecc working there.
>>
>> Trev
>>
>>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>>> out of the box. And, yes, you should use Windows 10 as your primary OS
>>> because that's what our users mostly use. I run Hyper-V under Windows 10
>>> and have at least 1 Linux VM running at all times. With 32 GB in the
>>> system, there's plenty of RAM to go around and Linux performance under the
>>> VM is excellent. It feels like I'm dual booting without the rebooting part.

Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Muizelaar
On Thu, Mar 23, 2017 at 11:42 PM, Robert O'Callahan
 wrote:
> On Fri, Mar 24, 2017 at 1:12 PM, Ehsan Akhgari  
> wrote:
>> On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:
>>
>>> I'm interested to find out how the new Ryzen chips do. It should fit
>>> their niche well. I have one at home now, so I'll test when I get a
>>> chance.
>>>
>>
>> Ryzen currently on Linux implies no rr, so beware of that.
>
> A contributor almost got Piledriver working with rr, but that was
> based on "LWP" features that apparently are not in Ryzen. If anyone
> finds any detailed documentation of the hardware performance counters
> in Ryzen, let us know! All I can find is PR material.

I have NDA access to at least some of the Ryzen documentation and I
haven't been able to find anything more on the performance counters
other than:

AMD64 Architecture Programmer’s Manual
Volume 2: System Programming
3.27 December 2016

This document is already publicly available.

I also have one of the chips so I can test code. If there are specific
questions I can also forward them through our AMD contacts.

-Jeff


Re: Tier 3 win64 ASan builds on try

2017-04-06 Thread Jeff Muizelaar
Glorious. Thanks to everyone who made this happen.

-Jeff

On Thu, Apr 6, 2017 at 10:11 PM, Ting-Yu Chou  wrote:
> Just a heads up that now we have win64 ASan builds on try. The try format:
>
>   try: -b o -p win64-asan -u none -t none
>
> Bug 1347793 is tracking the failed tests on taskcluster, though as of
> now the tests for normal Windows builds are not all green yet, and I
> haven't seen any real ASan issues.
>
> I am not sure if the js/fuzzing folks are interested, but we could
> probably try fuzzing jsshell-asan on Windows. (I wonder whether we have
> automated fuzzing tests on any infrastructure.)
>
> Note this is still in tier 3, things can regress easily.
>
> Ting


Re: Quantum Flow Engineering Newsletter #4

2017-04-07 Thread Jeff Muizelaar
We also got rid of some needless work that was happening on every
refresh driver tick. This should help CPU usage while the throbber is
spinning and generally gives the main thread of the parent process more
time to do useful things during animation.


-Jeff

On Fri, Apr 7, 2017 at 12:11 PM, Ehsan Akhgari  wrote:
> Hi everyone,
>
> As promised (with a day of delay), here is an update on what happened in
> the last two weeks on making Firefox faster as part of the Quantum Flow
> project.
>
> Last week we had a big work week at the Mozilla Toronto office.  Many
> members of the various teams were attending and the week was packed with a
> lot of planning around the performance issues that have been identified in
> each area so far, and what we are planning to do in each area for Firefox
> 57 and beyond.  I tried to attend as many of the discussions as I could,
> but of course many of the discussions were happening concurrently so I'm
> sure a lot of details is going to be missing, but here is a super high
> level of some of the plans that were being discussed.
>
>
>- DOM.  In the DOM team there are several plans and projects under way
>which will hopefully bring various performance improvements to the
>browser.  Probably the largest one is the upcoming plans for cooperative
>scheduling of tasks, which will allow us to interrupt currently executing
>JavaScript on background pages in order to service tasks belonging to
>foreground pages.  You may have seen patches landing as part of a large
>effort to label all of our runnables.  This is needed
>so that we can identify how to schedule tasks cooperatively.  We are
>planning to also soon do some work on throttling down timeouts running in
>background pages more aggressively.  More details will be announced about
>all of these projects very soon.  Furthermore we are working on smaller
>scale performance improvements in various parts of the DOM module as new
>performance issues are discovered through various benchmarks.
>- JavaScript.  In the JavaScript team there have been several streams of
>work ongoing to work on various improvements to the various aspects of our
>JS execution.  Jan de Mooij and colleagues have been running the CacheIR
>project for a
>while as an attempt to share our inline caches (ICs) between the baseline
>and Ion JIT layers.  This helps with unifying the cases that can be
>optimized in these JIT layers and has been showing meaningful improvements
>both on real web pages and benchmarks such as Speedometer.  They have also
>been looking at various opportunistic optimizations that also help
>performance issues we have identified through profiling as well.  Another
>line of investigation in the JS team for a while has been looking into this
>bug.  We have
>some evidence to suggest that our JIT generated code isn't very efficient
>in terms of the CPU instruction cache usage, but so far that investigation
>hasn't resulted in anything super conclusive.  Another extensive discussion
>topic was GC scheduling.  Right now the way that our GC (and cycle
>collection) scheduling works is pretty dis-coordinated between SpiderMonkey
>and Gecko, and this can result in pathological cases where for example
>SpiderMonkey sometimes doesn't know that a specific time is an unfortunate
>time to run a long running GC, and Gecko doesn't have a good way to ask
>SpiderMonkey to stop an ongoing GC if it detects that now would be a good
>time to do something else, etc.  We're going to start to improve this
>situation by coordinating the scheduling between these two parts of the
>browser.  This is one of those architectural changes that can have a pretty
>big impact also in the longer term as we find more ways to leverage better
>coordination.  Another topic that was discussed was improving the
>performance of our XRay wrappers that provide
>chrome JS code access to content JS objects.  This is important for some
>front-end code, and also for the performance of some Web Extensions.
>- Layout.  In the Layout team, we are focusing on improving our reflow
>performance.
>One challenge that we have in this area is finding which reflow issues are
>the important ones.  We have done some profiling and measurement and we
>have identified some issues so far, and we can definitely find more issues,
>but it's very hard to know how much optimization is enough, which ones are
>the important ones, and whether we know of the important problems.  The
>nat

Re: new configure option: --enable-debug-rust

2017-05-11 Thread Jeff Muizelaar
On Fri, Apr 14, 2017 at 10:46 AM, Nathan Froyd  wrote:
> With these options, you get a browser that runs quickly (i.e. no DEBUG
> assertions in C++ code), but still lets you debug the Rust code you
> might be working on, ideally with faster compile times than you might
> get otherwise.  --enable-debug implies --enable-debug-rust, of course.

From my reading of config/rules.mk and experience, it looks like
--enable-debug-rust does not disable optimizations in Rust code. With
opt-level=1, Rust still doesn't have a great debugging experience (the
compiler mostly seems to think things are optimized out).

-Jeff
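If the goal is fully unoptimized Rust alongside optimized C++, one workaround sketch is forcing rustc's optimization level via RUSTFLAGS in a local mozconfig. The rustc flags below are standard; whether the build system honors them in every directory is an assumption:

```shell
# mozconfig fragment (sketch): optimized C++, debuggable Rust.
ac_add_options --enable-debug-rust
# Standard rustc flags; -C opt-level=0 keeps locals from being
# "optimized out", -C debuginfo=2 emits full debug info.
export RUSTFLAGS="-C opt-level=0 -C debuginfo=2"
```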


Re: Is it OK to make allocations that intentionally aren't freed? (was: Re: Is Big5 form submission fast enough?)

2017-05-19 Thread Jeff Muizelaar
We use functions like cairo_debug_reset_static_data() on shutdown to
handle cases like this.

-Jeff

On Fri, May 19, 2017 at 1:44 AM, Henri Sivonen  wrote:
> On Tue, May 16, 2017 at 7:03 AM, Tim Guan-tin Chien
>  wrote:
>> According to Alexa top 100 Taiwan sites and quick spot checks, I can only
>> see the following two sites encoded in Big5:
>>
>> http://www.ruten.com.tw/
>> https://www.momoshop.com.tw/
>>
>> Both are shopping sites (eBay-like and Amazon-like) so you get the idea how
>> forms are used there.
>
> Thank you. It seems to me that encoder performance doesn't really
> matter for sites like these, since the number of characters one would
> enter in the search field at a time is very small.
>
>> Mike reminded me to check the Tax filing website: http://www.tax.nat.gov.tw/
>> .Yes, it's unfortunately also in Big5.
>
> I guess I'm not going to try filing taxes there for testing. :-)
>
> - -
>
> One option I've been thinking about is computing an encode
> acceleration table for JIS X 0208 on the first attempt to encode a CJK
> Unified Ideograph in any of Shift_JIS, EUC-JP or ISO-2022-JP, for GBK
> on the first attempt to encode a CJK Unified Ideograph in either GBK
> or gb18030, and for Big5 on the first attempt to encode a CJK Unified
> Ideograph in Big5.
>
> Each of the three tables would then remain allocated through to the
> termination of the process.
>
> This would have the advantage of not bloating our binary footprint
> with data that can be computed from other data in the binary while
> still making legacy Chinese and Japanese encode fast without a setup
> cost for each encoder instance.
>
> The downsides would be that the memory for the tables wouldn't be
> reclaimed if the tables aren't needed anymore (the browser can't
> predict the future) and executions where any of the tables has been
> created wouldn't be valgrind-clean. Also, in the multi-process world,
> the tables would be recomputed per-process. OTOH, if we shut down
> rendered processes from time to time, it would work as a coarse
> mechanism to reclaim the memory is case Japanese or Chinese legacy
> encode is a relatively isolated event in the user's browsing pattern.
>
> Creating a mechanism for the encoding library to become aware of
> application shutdown just in order to be valgrind-clean would be
> messy, though. (Currently, we have shutdown bugs where uconv gets used
> after we've told it can shut down. I'd really want to avoid
> re-introducing that class of bugs with encoding_rs.)
>
> Is it OK to create allocations that are intentionally never freed
> (i.e. process termination is what "frees" them)? Is valgrind's message
> suppression mechanism granular enough to suppress three allocations
> from a particular Rust crate statically linked into libxul?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
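The "allocate once, never free" pattern under discussion can be sketched as follows. These are hypothetical names for illustration; the real implementation would live in encoding_rs (Rust), and the table contents here are purely illustrative:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Lazily build an encode-acceleration table on first use and never free
// it: process termination is what reclaims the memory.
const std::vector<uint16_t>& Big5EncodeTable() {
  static const std::vector<uint16_t>* sTable = nullptr;
  static std::once_flag sOnce;
  std::call_once(sOnce, [] {
    auto* table = new std::vector<uint16_t>(0x10000, 0);
    // ... fill from the decode-oriented data already in the binary ...
    (*table)[0x4E00] = 0xA440;  // illustrative entry only
    sTable = table;  // intentionally leaked; valgrind needs a suppression
  });
  return *sTable;
}
```

Because the allocation stays reachable through a static pointer for the process lifetime, valgrind reports it as "still reachable" rather than "definitely lost", and its suppression files can filter such reports by call stack.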


Re: Profiling nightlies on Mac - what tools are used?

2017-06-19 Thread Jeff Muizelaar
Yes. I use Instruments on Nightly builds extensively. It would really
be a loss to lose this functionality. I think it's important to weigh
the performance improvements that we get from easy profiling against
any advantage we get from stripping the symbols.

-Jeff

On Mon, Jun 19, 2017 at 6:07 PM, Bobby Holley  wrote:
> Instruments is the big one that I'm aware of.
>
> On Mon, Jun 19, 2017 at 3:03 PM, Chris Cooper  wrote:
>
>> Hey all,
>>
>> The build peers are looking to change the way that nightlies are created
>> on Mac as we switch to cross-compilation. Specifically, we're looking at
>> stripping the nightlies to avoid an as-of-yet undiagnosed performance
>> discrepancy vs native builds[1], but also to make the nightly configuration
>> match what we ship on beta/release (stripped).
>>
>> Of course, stripping removes the symbols, and while we believe we have a
>> solution for re-acquiring symbols that works for the Gecko Profiler, we
>> realize
>> that people out there may be using other profiling tools.
>>
>> If you profile on Mac, now is your chance to speak up. What other
>> profiling tools do you use that we should be aware of?
>>
>> cheers,
>> --
>> coop
>>
>> 1. https://bugzilla.mozilla.org/show_bug.cgi?id=1338651
>>


Re: Profiling nightlies on Mac - what tools are used?

2017-06-20 Thread Jeff Muizelaar
Very much so, yes. Even if unstripped builds were universally slower
(they only seem to be slower on the CI machines), any performance
impact is unlikely to change the distribution of samples substantially.

On Tue, Jun 20, 2017 at 2:09 PM, Chris Peterson  wrote:
> On 6/20/17 10:28 AM, Ehsan Akhgari wrote:
>>
>> That seems like the obvious next step to investigate to me.  We should
>> *really* only talk about stripping builds as the last resort IMO, since we
>> have way too many developers using OSX every day...
>
>
> Does profiling an unstripped Mac build still produce useful results if the
> unstripped builds are slower than the stripped builds we ship to users?
>


Re: OS/2 still supported ?

2017-07-25 Thread Jeff Muizelaar
On Tue, Jul 25, 2017 at 4:04 AM, Enrico Weigelt, metux IT consult
 wrote:
> On 25.07.2017 02:04, Kris Maglione wrote:
>
>> The only remaining in-tree references to the XP_OS2 macros are in NSPR
>> and NSS, which are technically separate projects, and have their own
>> sets of supported platforms.
>
>
> In esr52 there's a bit more:
>
> gfx/2d/DrawTargetCairo.cpp
> gfx/cairo/cairo/src/cairo-features.h.in
> gfx/cairo/cairo/src/cairo-mutex-impl-private.h
> gfx/cairo/cairo/src/cairo-os2-private.h
> gfx/cairo/cairo/src/cairo-os2-surface.c
> gfx/cairo/cairo/src/cairo-os2.h
> gfx/cairo/cairo/src/cairo.h

The cairo stuff is from an upstream project and not worth removing.

-Jeff


Re: OS/2 still supported ?

2017-07-25 Thread Jeff Muizelaar
On Tue, Jul 25, 2017 at 11:25 PM, Steve Wendt  wrote:
> On 7/25/2017 7:28 AM, Jeff Muizelaar wrote:
>
>>>> The only remaining in-tree references to the XP_OS2 macros are in
>>>> NSPR and NSS, which are technically separate projects, and have
>>>> their own sets of supported platforms.
>>
>>
>> The cairo stuff is from an upstream project and not worth removing.
>
>
> Likewise for libvpx and libffi?

Yes.

-Jeff


Re: More Rust code

2017-08-08 Thread Jeff Muizelaar
On Mon, Aug 7, 2017 at 6:12 PM, Mike Hommey  wrote:
>   Note that the tp5n main_startup_fileio reflects the resulting size of
>   xul.dll, which also impacts the installer size:
>  32-bits   64-bits
>   MSVC (PGO):   37904383  40803170
>   clang-cl: 39537860  40561849
>   clang-cl -O2: 41976097  43338891

FWIW, https://bugs.llvm.org//show_bug.cgi?id=26299 is the metabug for
tracking improvements to x86-32 code size.

-Jeff


Re: Firefox and clang-cl

2017-08-12 Thread Jeff Muizelaar
On Sat, Aug 12, 2017 at 9:40 PM, Ehsan Akhgari  wrote:
> Last but not least, you may ask yourself why would we want to spend this
> much effort to switch to clang-cl on Windows?  I believe this is an
> important long term shift that is beneficial for us.  First and foremost,
> clang is a vibrant open source compiler, and being able to use open source
> toolchains on our most important platforms is really important for us in
> terms of being able to contribute to the compiler where needed

It's worth emphasizing the value of using an open source compiler.
Being able to find and fix bugs in the compiler instead of having to
work around them without knowing the true cause is enormously
valuable. A recent example of this happened to me yesterday with
https://bugzilla.mozilla.org/show_bug.cgi?id=1382857. Once I had
reported the issue (https://bugs.llvm.org/show_bug.cgi?id=34163) a fix
was committed to clang trunk in less than six hours. That's something
that would never be possible with MSVC.

-Jeff


Re: Linking with lld instead of ld/gold

2017-08-14 Thread Jeff Muizelaar
I believe all three linkers (bfd, gold, and lld) can currently do LTO
on LLVM bitcode. Naively, I'd assume that getting
cross-compilation-unit optimization across Rust and clang compilation
units is more of a build system issue than a linker one.
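
To sketch what that build-system wiring amounts to: every compiler emits LLVM bitcode, and the linker performs the cross-unit optimization. The fragment below is a hypothetical mozconfig-style sketch only; the flag names come from the LLVM and rustc documentation, not from a tested mozilla-central configuration.

```sh
# Hypothetical sketch of cross-language ThinLTO wiring (untested).
export CC=clang
export CXX=clang++
export CFLAGS="-flto=thin"
export CXXFLAGS="-flto=thin"
export RUSTFLAGS="-Clinker-plugin-lto"  # put LLVM bitcode in the rlibs
export LDFLAGS="-fuse-ld=lld"           # lld ingests bitcode from both compilers
```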

-Jeff

On Mon, Aug 14, 2017 at 2:16 AM, Henri Sivonen  wrote:
> On Mon, Aug 14, 2017 at 12:08 AM, Sylvestre Ledru  
> wrote:
>> Thanks to bug https://bugzilla.mozilla.org/show_bug.cgi?id=1336978, it
>> is now possible to link with LLD (the linker from the LLVM toolchain)
>> on Linux instead of bfd or gold.
>
> Great news. Thank you!
>
> Does this enable lld to ingest object files that contain LLVM bitcode
> instead of target machine code and to perform cross-compilation-unit
> optimization? How far are we from cross-compilation-unit optimization
> when some compilation units come from clang and some from rustc?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/


Re: Canonical cinnabar repository

2017-09-18 Thread Jeff Muizelaar
I agree having a canonical version would be very valuable. In the
meantime, if you want to avoid doing the entire conversion locally,
you can start by cloning the cinnabar branch of
https://github.com/jrmuizel/gecko-cinnabar, which is a full local
conversion that I painfully uploaded to GitHub.

-Jeff

On Mon, Sep 18, 2017 at 11:04 AM, Myk Melez  wrote:
>> Kartikaya Gupta 
>> 2017 September 18 at 07:05
>> It seems to me that a lot of people are now assuming a cinnabar repo
>> is the canonical way for git users to develop on mozilla-central. If
>> we want to make this mozilla policy I don't really have objections,
>> but I think that if we do that, we should maintain a canonical git
>> repo that is built using cinnabar, rather than having everybody have
>> their own "grafted" version of a cinnabar repo. The problem with the
>> latter approach is that different people will have different SHAs for
>> the same upstream commit, thus making it much harder to share repos.
>
> Note that there's a third option, which is for everyone to have their own
> non-grafted version of a cinnabar repo. If you clone mozilla-central using
> cinnabar, instead of grafting commits onto a gecko-dev clone, then that's
> what you get, since cinnabar revision ID conversion is deterministic (as I
> understand it, anyway).
>
> Having said that, I agree that it's worth enabling developers to clone a
> canonical Git repo. I've been syncing mozilla/gecko using cinnabar for a
> while to experiment with ways of doing this. There've also been
> conversations about syncing new commits to mozilla/gecko-dev with cinnabar
> for a few years, although I don't know of any active efforts to do this.
>
> -myk
>
>


Re: Canonical cinnabar repository

2017-09-18 Thread Jeff Muizelaar
FWIW, https://github.com/jrmuizel/gecko-cinnabar doesn't have the CVS
history, so it is no better than https://github.com/mozilla/gecko. Having
a canonical repo that includes the CVS history will make the SHAs
incompatible with a direct conversion of the hg repository, which is a
disadvantage. I'm not sure which is more valuable.

-Jeff

On Mon, Sep 18, 2017 at 2:21 PM, Ehsan Akhgari  wrote:
> On 09/18/2017 01:16 PM, Bobby Holley wrote:
>>
>> On Mon, Sep 18, 2017 at 8:25 AM, Andrew McCreight 
>> wrote:
>>
>>> On Mon, Sep 18, 2017 at 7:05 AM, Kartikaya Gupta 
>>> wrote:
>>>
 I've tried using cinnabar a couple of times now and the last time I
 tried, this was the dealbreaker for me. My workflow often involves
 moving a branch from one machine to another and the extra hassle that
 results from mismatched SHAs makes it much more complicated than it
 needs to be. gecko-dev doesn't have this problem as it has a canonical
 upstream that works much more like a regular git user expects.

>>> For what it is worth, I regularly pull from one machine to another with
>>> git-cinnabar, and it works just fine without any problems from mismatched
>>> SHAs. For me, the switch from a clone of gecko-dev to git-cinnabar has
>>> been
>>> totally transparent.
>>>
>> +1. The non-stable SHA problem was solved a long time ago. Same goes for
>> any big performance issues. In my experience, cinnabar is pretty darn
>> transparent.
>>
>> https://github.com/mozilla/gecko is effectively the canonical repo people
>> are talking about. I sometimes pull that, but git-cinnabar is fast enough
>> that it works fine to just clone the hg repo directly. If it weren't for
>> the occasional annoyance of mapping commits between local revs and hg.m.o
>> links, I would basically forget that the core infrastructure is running
>> hg.
>
> That repo doesn't have the CVS history.  :-(  I realize that is fixable with
> a local graft and a clone of gecko-dev, but a lot of blood and sweat went
> into making our current canonical git repo include the full CVS history (I
> maintained it myself for ~3 years and a lot of people spent quite a bit of
> time and energy to stand up the current infrastructure that maintains
> gecko-dev.)  Would it be possible to base the canonical git-cinnabar repo on
> https://github.com/jrmuizel/gecko-cinnabar which does have the full CVS
> history?
>


Re: Canonical cinnabar repository

2017-09-19 Thread Jeff Muizelaar
On Mon, Sep 18, 2017 at 5:02 PM, Ehsan Akhgari  wrote:
> On 09/18/2017 03:30 PM, Bobby Holley wrote:
>>
>> CVS history feels like an odd bar for cinnabar. The goal of cinnabar is to
>> enable seamless integration between git and mercurial with reproducible, 1:1
>> commit mappings. Our canonical mercurial repositories don't have CVS
>> history, so we shouldn't expect the cinnabar clones of those repositories to
>> have CVS history either.
>
> FWIW the question here is moving from a canonical repository with CVS
> history to one without.

I can't think of a reason we can't have both a repo with CVS history and one without.
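
For anyone who wants both views locally in the meantime, grafting is one way to stitch the CVS-era history onto a cinnabar clone. This is an illustrative sketch only: the commit IDs are placeholders and the remote name is an assumption, not a recipe.

```sh
# Illustrative sketch -- commit IDs are placeholders.
# Fetch the CVS-era history from gecko-dev into a cinnabar clone...
git remote add gecko-dev https://github.com/mozilla/gecko-dev
git fetch gecko-dev
# ...then graft the first hg-era commit onto the last CVS-era commit.
git replace --graft <first-hg-commit> <last-cvs-commit>
```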

-Jeff


Re: Canonical cinnabar repository

2017-09-20 Thread Jeff Muizelaar
I would recommend
https://github.com/glandium/git-cinnabar/wiki/Mozilla:-A-git-workflow-for-Gecko-development.

The other places should probably be updated to point at that.

-Jeff

On Wed, Sep 20, 2017 at 12:57 PM, Ethan Glasser-Camp
 wrote:
> Sorry if this is a bit off-topic. It seems from these threads that there is
> a more-or-less canonical way to use git to hack on Firefox. Where can I
> find out more about it?
>
> Looking online, the only information I could find was at
> https://github.com/glandium/git-cinnabar/wiki/Mozilla:-A-git-workflow-for-Gecko-development.
> Is that the best source of information? I didn't see anything under
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide,
> http://mozilla-version-control-tools.readthedocs.io/en/latest/, or
> https://firefox-source-docs.mozilla.org/.
>
> Thanks!
>
> Ethan
>
>
> On Mon, Sep 18, 2017 at 10:05 AM, Kartikaya Gupta 
> wrote:
>
>> This message was inspired by the `mach try` thread but is off-topic
>> there so I think deserves its own thread.
>>
>> It seems to me that a lot of people are now assuming a cinnabar repo
>> is the canonical way for git users to develop on mozilla-central. If
>> we want to make this mozilla policy I don't really have objections,
>> but I think that if we do that, we should maintain a canonical git
>> repo that is built using cinnabar, rather than having everybody have
>> their own "grafted" version of a cinnabar repo. The problem with the
>> latter approach is that different people will have different SHAs for
>> the same upstream commit, thus making it much harder to share repos.
>>
>> I've tried using cinnabar a couple of times now and the last time I
>> tried, this was the dealbreaker for me. My workflow often involves
>> moving a branch from one machine to another and the extra hassle that
>> results from mismatched SHAs makes it much more complicated than it
>> needs to be. gecko-dev doesn't have this problem as it has a canonical
>> upstream that works much more like a regular git user expects.
>>
>> As an aside, I also think that the cinnabar workflow as it exists now
>> actually demotes git to more of a "second-class citizen".
>> Conceptually, if you're using gecko-dev, everything works exactly as a
>> git user would expect, and only when you need to push to official
>> mozilla hg repos do you need to overcome the vcs translation hurdle
>> (which things like moz-git-tools help with). However if you use
>> cinnabar the vcs translation is more woven into your everyday git
>> commands (e.g. git pull) and you need to be more consciously aware of
>> it. This makes it harder to use whatever your normal git workflow is,
>> which is why I claim it demotes git to second-class. It would be great
>> if we could come up with a way to avoid this but honestly since I
>> haven't used a cinnabar workflow for any significant period of time I
>> haven't given much thought as to how to go about doing this.
>>
>> Discussion welcome!
>>
>> Cheers,
>> kats


Re: Threadsafe URLs - MozURL

2017-10-23 Thread Jeff Muizelaar
For the curious among us, what made nsIURI not thread safe in the first place?

-Jeff

On Mon, Oct 23, 2017 at 10:01 AM, Valentin Gosu  wrote:
> Hi everyone,
>
> Threadsafe URLs have been high on everybody's wishlist for a long while.
> The fact that our nsIURI implementations weren't thread safe meant that
> hacks had to be used to use a URI off the main thread, such as saving it as
> a string, or bouncing back to the main thread whenever you had to use the
> URI in any way.
>
> A few weeks ago we landed MozURL. This is an immutable threadsafe wrapper
> for rust-url. While it's not yet ready to fully replace our existing URL
> implementations, it's good enough to avoid using the hacks I just mentioned.
>
> For examples of how to use it go to the header file [1] or the gtests [2]
>
> Work is also under way on a threadsafe implementation of nsIURI
> that we eventually hope will replace our other URI parsers, and on
> improving the rust-url parser to be faster than our current
> nsStandardURL implementation [3].
>
> [1] http://searchfox.org/mozilla-central/source/netwerk/base/MozURL.h
> [2]
> http://searchfox.org/mozilla-central/source/netwerk/test/gtest/TestMozURL.cpp
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1394906#c2


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
Yeah. I'd suggest anyone who's running Linux on these machines just go
out and buy a $100 AMD GPU to replace the Quadro. Even if you don't
expense the new GPU and just throw the Quadro in the trash, you'll
probably be happier.

-Jeff

On Thu, Oct 26, 2017 at 9:34 AM, Henri Sivonen  wrote:
> On Thu, Oct 26, 2017 at 9:15 AM, Henri Sivonen  wrote:
>> There's a huge downside, though:
>> If the screen stops consuming the DisplayPort data stream, the
>> graphical session gets killed! So if you do normal things like turn
>> the screen off or switch input on a multi-input screen, your graphical
>> session is no longer there when you come back and you get a login
>> screen instead! (I haven't yet formed an opinion on whether this
>> behavior can be lived with or not.)
>
> And the downsides don't even end there. rr didn't work. Plus other
> stuff not worth mentioning here.
>
> I guess going back to 16.04.1 is a better deal than 17.10.
>
>> P.S. It would be good for productivity if Mozilla issued slightly less
>> cutting-edge Nvidia GPUs to developers to increase the probability
>> that support in nouveau has had time to bake.
>
> This Mozilla-issued Quadro M2000 has been a very significant harm to
> my productivity. Considering how good rr is, I think it makes sense to
> continue to run Linux to develop Firefox. However, I think it doesn't
> make sense to issue fancy cutting-edge Nvidia GPUs to developers who
> aren't specifically working on Nvidia-specific bugs and, instead, it
> would make sense to issue GPUs that are boring as possible in terms of
> Linux driver support (i.e. Just Works with distro-bundled Free
> Software drivers). Going forward, perhaps Mozilla could issue AMD GPUs
> with computers that don't have Intel GPUs?
>
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/


Re: Default Rust optimization level decreased from 2 to 1

2017-10-26 Thread Jeff Muizelaar
FWIW, WebRender becomes unusable at opt-level=1. It also looks like
style performance takes quite a hit as well, which means that our
default developer builds become unusable for performance work. I worry
that people will forget this and end up rediscovering it only when
they look at profiles (as mstange just did). What's the use case for
an --enable-optimize, opt-level=1 build?

-Jeff

On Wed, Oct 25, 2017 at 1:34 PM, Gregory Szorc  wrote:
> Compiling Rust code with optimizations is significantly slower than
> compiling without optimizations. As was measured in bug 1411081, the
> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
> 625s/1282s. This made Rust compilation during Firefox builds stand out as a
> long pole and significantly slowed down builds.
>
> Because we couldn't justify the benefits of level 2 for the build time
> overhead it added, we've changed the build system default so Rust is
> compiled with -Copt-level=1 (instead of 2).
>
> Adding --enable-release to your mozconfig (the configuration for builds we
> ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
> settings for builds we ship to users.)
>
> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
> unchanged.)
>
> If you want explicit control over -Copt-level, you can `export
> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
> used. --enable-release implies a number of other changes. So if you just
> want to restore the old build system behavior, set this variable in your
> mozconfig.
>
> Also, due to ongoing work around Rust integration in the build system, it
> is dangerous to rely on manual `cargo` invocations to compile Rust because
> bypassing the build system (not using `mach build`) may not use the same
> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
> sync before. But this change and anticipated future changes will cause more
> drift. If you want the correct behavior, use `mach`.
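
The RUSTC_OPT_LEVEL knob described above is a one-line mozconfig fragment; for example (the value 2 here is just an illustration):

```sh
# mozconfig: pin the Rust optimization level regardless of the default.
# The value is illustrative; pick whatever level you need.
export RUSTC_OPT_LEVEL=2
```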


Re: Default Rust optimization level decreased from 2 to 1

2017-10-26 Thread Jeff Muizelaar
On Thu, Oct 26, 2017 at 3:08 PM, Gregory Szorc  wrote:
> Would it help if we had a separate --enable-optimize-rust (or similar)
> option to control Rust optimizations so we have separate knobs? If we did
> that, --disable-optimize-rust could be opt-level 0 or 1 and
> --enable-optimize-rust could be opt-level=2. The local defaults would
> probably be --enable-optimize/--disable-optimize-rust (mirroring today).

Yeah, that would probably be more user-friendly than the environment
variable solution that we have today. Still, it's hard to know what
the correct defaults are.

> I'm not sure if it is possible, but per-crate optimization levels might
> help. Although, the data shows that the style crate is one of the slowest to
> compile. And, this crate's optimization is also apparently very important
> for accurate performance testing. That's a really unfortunate conflict to
> have and it would be terrific if we could make the style crate compile
> faster so this conflict goes away. I've filed bug 1412077 to track
> improvements here.

Hopefully the thinlto work that Alex is doing
(https://internals.rust-lang.org/t/help-test-out-thinlto/6017) will
make a difference here.
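
Per-crate optimization levels of the kind discussed above can be expressed with Cargo profile overrides. A hedged sketch (syntax per the Cargo documentation; the `style` package name is illustrative):

```toml
# Keep the default dev profile fast to compile...
[profile.dev]
opt-level = 1

# ...but fully optimize the hot crate (package name is illustrative).
[profile.dev.package.style]
opt-level = 2
```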

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
On Thu, Oct 26, 2017 at 7:02 PM, Gregory Szorc  wrote:
> Unless you have requirements that prohibit using a
> VM, I encourage using this setup.

rr doesn't work in Hyper-V. AFAIK the only Windows VM it works in is VMware.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
On Thu, Oct 26, 2017 at 7:02 PM, Gregory Szorc  wrote:
> I also share your desire to not issue fancy video cards in these machines
> by default. If there are suggestions for a default video card, now is the
> time to make noise :)

Intel GPUs are the best choice if you want to be like the bulk of our
users. Otherwise, any cheap AMD GPU is going to be good enough.
Probably the number and kind of display outputs are what matter most.

-Jeff


Re: Default Rust optimization level decreased from 2 to 1

2017-10-31 Thread Jeff Muizelaar
As another piece of evidence in support of opt-level=1 being the wrong
default, Glenn also got bitten profiling with the wrong options.
https://github.com/servo/webrender/issues/1817#issuecomment-340553613

-Jeff

On Thu, Oct 26, 2017 at 2:51 PM, Jeff Muizelaar  wrote:
> FWIW, WebRender becomes unusable at opt-level=1. It also looks like
> style performance takes quite a hit as well, which means that our
> default developer builds become unusable for performance work. I
> worry that people will forget this and end up rediscovering it only
> when they look at profiles (as mstange just did). What's the use
> case for an --enable-optimize, opt-level=1 build?
>
> -Jeff
>
> On Wed, Oct 25, 2017 at 1:34 PM, Gregory Szorc  wrote:
>> Compiling Rust code with optimizations is significantly slower than
>> compiling without optimizations. As was measured in bug 1411081, the
>> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
>> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
>> 625s/1282s. This made Rust compilation during Firefox builds stand out as a
>> long pole and significantly slowed down builds.
>>
>> Because we couldn't justify the benefits of level 2 for the build time
>> overhead it added, we've changed the build system default so Rust is
>> compiled with -Copt-level=1 (instead of 2).
>>
>> Adding --enable-release to your mozconfig (the configuration for builds we
>> ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
>> settings for builds we ship to users.)
>>
>> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
>> unchanged.)
>>
>> If you want explicit control over -Copt-level, you can `export
>> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
>> used. --enable-release implies a number of other changes. So if you just
>> want to restore the old build system behavior, set this variable in your
>> mozconfig.
>>
>> Also, due to ongoing work around Rust integration in the build system, it
>> is dangerous to rely on manual `cargo` invocations to compile Rust because
>> bypassing the build system (not using `mach build`) may not use the same
>> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
>> sync before. But this change and anticipated future changes will cause more
>> drift. If you want the correct behavior, use `mach`.


Re: Default Rust optimization level decreased from 2 to 1

2017-10-31 Thread Jeff Muizelaar
On Tue, Oct 31, 2017 at 3:21 PM, Gregory Szorc  wrote:
> On Tue, Oct 31, 2017 at 12:02 PM, Jeff Muizelaar 
> wrote:
>>
>> As another piece of evidence in support opt-level=1 being the wrong
>> default, Glenn also got bitten profiling with the wrong options.
>> https://github.com/servo/webrender/issues/1817#issuecomment-340553613
>
>
> It is "wrong" for the set of "people performing profiling." This set is
> different from "people compiling Gecko." Which is different from "people who
> actually need to compile Gecko." What I'm trying to say is that the new default is
> "not wrong" for a large set of people who aren't "people performing profiling."

I say this a bit tongue-in-cheek, but given our big performance push,
hopefully the set of people who aren't profiling is not that large.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Muizelaar
On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik  wrote:
> Hi All,
>
> I'm in the middle of getting another evaluation machine with a 10-core
> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
> speed and performance) but with ECC memory support.
>
> I'm trying to make sure this is a "one size fits all" machine as much as
> possible.

What's the advantage of having a "one size fits all" machine? I
imagine there's quite a range of uses and preferences for these
machines, e.g. some people are going to be spending more time waiting
on a single core and so would prefer a smaller core count and a higher
clock, while other people want a machine that's as wide as possible.
Some people would value performance over correctness and so would
likely not want ECC, etc. I've heard a number of horror stories of
people ending up with hardware that's not well suited to their tasks
just because that was the only hardware on the list.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Muizelaar
The Core i9s are quite a bit cheaper than the Xeon Ws:
https://ark.intel.com/products/series/125035/Intel-Xeon-Processor-W-Family vs
https://ark.intel.com/products/126695

I wouldn't want to trade ECC for 4 cores.

-Jeff

On Tue, Nov 7, 2017 at 3:51 PM, Sophana "Soap" Aik  wrote:
> Kris has touched on the many advantages of having a standard model. From
> what I am seeing of most people's use cases, only the GPU will vary with
> what the machine is used for, e.g. the VR research team may end up
> only needing a GPU upgrade.
>
> Fortunately the new W-Series Xeons seem to be equal to or better than the
> Core i9s, but with ECC support, so there's no sacrifice in single-threaded
> or multi-threaded performance.
>
> With all that said, we'll move forward with the evaluation machine and find
> out for sure in real world testing. :)
>
>
>
> On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione 
> wrote:
>>
>> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>>>
>>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>>> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I'm in the middle of getting another evaluation machine with a 10-core
>>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>>> speed and performance) but with ECC memory support.
>>>>
>>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>>> possible.
>>>
>>>
>>> What's the advantage of having a "one size fits all" machine? I
>>> imagine there's quite a range of uses and preferences for these
>>> machines, e.g. some people are going to be spending more time waiting
>>> on a single core and so would prefer a smaller core count and a higher
>>> clock, while other people want a machine that's as wide as possible.
>>> Some people would value performance over correctness and so would
>>> likely not want ECC, etc. I've heard a number of horror stories of
>>> people ending up with hardware that's not well suited to their tasks
>>> just because that was the only hardware on the list.
>>
>>
>> High core count Xeons will divert power from idle cores to increase the
>> clock speed of saturated cores during mostly single-threaded workloads.
>>
>> The advantage of a one-size-fits-all machine is that it means more of us
>> have the same hardware configuration, which means fewer of us running into
>> independent issues, more of us being able to share software configurations
>> that work well, easier purchasing and stocking of upgrades and accessories,
>> ... I own a personal high-end Xeon workstation, and if every developer at
>> the company had to go through the same teething and configuration troubles
>> that I did while breaking it in, we would not be in a good place.
>>
>> And I don't really want to get into the weeds on ECC again, but the
>> performance of load-reduced ECC is quite good, and the additional cost of
>> ECC is very low compared to the cost of developer time over the two years
>> that they're expected to use it.
>
>
>
>
> --
> moz://a
> Sophana "Soap" Aik
> IT Vendor Management Analyst
> IRC/Slack: soap


Re: PSA: Chrome-only WebIDL interfaces no longer require DOM peer review

2018-03-09 Thread Jeff Muizelaar
On Fri, Mar 9, 2018 at 7:21 AM, Ted Mielczarek  wrote:
> On Thu, Mar 8, 2018, at 7:41 PM, Bobby Holley wrote:
>> (C) The API uses complex arguments like promises that XPIDL doesn't handle
>> in a nice way.
>
> I think this is an understated point. WebIDL was designed explicitly to allow 
> expressing the semantics of JS APIs, where XPIDL is some arbitrary set of 
> things designed by folks at Netscape a long time ago. Almost any non-trivial 
> API will wind up being worse in XPIDL (and the C++ implementation side is 
> worse as well).
>
> I agree that an XPConnect-alike supporting WebIDL semantics would be a lot of 
> work, but I also think that asking developers to implement chrome interfaces 
> with XPIDL is pretty lousy.

An alternative would be to evolve XPIDL to be more WebIDL-like. I
suspect we could fix some of the ergonomic warts incrementally with
significantly less work than supporting the full WebIDL semantics in
an XPConnect style.
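
For contrast, the promise case Bobby mentions is a one-liner in WebIDL. A hedged sketch (the interface and method names are made up for illustration, not a real Gecko API):

```webidl
// Hypothetical chrome-only interface; all names are illustrative.
[ChromeOnly, Exposed=Window]
interface ThingService {
  Promise<DOMString> fetchThing(DOMString id);
};
```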

-Jeff


Re: Can we focus more on color management support?

2018-03-26 Thread Jeff Muizelaar
Unfortunately it hasn't been a priority. Hopefully we'll get to it eventually.

-Jeff

On Fri, Mar 23, 2018 at 10:56 AM,   wrote:
> Chrome, Safari treat untagged images as sRGB, can read tagged ICCv4 images 
> and support video color management.
>
> Firefox does not have these features by default. Any ETA?


Re: Default Rust optimization level decreased from 2 to 1

2018-04-25 Thread Jeff Muizelaar
At a minimum, we should make --enable-profiling build with Rust optimizations.

-Jeff

On Wed, Apr 25, 2018 at 11:35 AM, Emilio Cobos Álvarez  wrote:
> There's a fair amount of people bitten by this constantly, which see long
> style profiling markers and what's really happening is that they're
> profiling a local opt build, and thus the Rust code in style has barely any
> optimization and is slow.
>
> I know that shouldn't be a thing, and that people should --enable-release
> for profiling and all that. But given it happens, could we consider
> reverting this change?
>
>  -- Emilio
>
>
> On 10/25/17 7:34 PM, Gregory Szorc wrote:
>>
>> Compiling Rust code with optimizations is significantly slower than
>> compiling without optimizations. As was measured in bug 1411081, the
>> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
>> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
>> 625s/1282s. This made Rust compilation during Firefox builds stand out as
>> a
>> long pole and significantly slowed down builds.
>>
>> Because we couldn't justify the benefits of level 2 for the build time
>> overhead it added, we've changed the build system default so Rust is
>> compiled with -Copt-level=1 (instead of 2).
>>
>> Adding --enable-release to your mozconfig (the configuration for builds we
>> ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
>> settings for builds we ship to users.)
>>
>> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
>> unchanged.)
>>
>> If you want explicit control over -Copt-level, you can `export
>> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
>> used. --enable-release implies a number of other changes. So if you just
>> want to restore the old build system behavior, set this variable in your
>> mozconfig.
>>
>> Also, due to ongoing work around Rust integration in the build system, it
>> is dangerous to rely on manual `cargo` invocations to compile Rust because
>> bypassing the build system (not using `mach build`) may not use the same
>> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
>> sync before. But this change and anticipated future changes will cause
>> more
>> drift. If you want the correct behavior, use `mach`.


Re: Plan for Sunsetting MozReview

2018-07-27 Thread Jeff Muizelaar
Beware when using a WSL terminal with a Firefox source directory:
new directories created in WSL are case-sensitive, and this
confuses cl.exe. This bit me last week.

-Jeff

On Fri, Jul 27, 2018 at 9:30 AM, Marco Bonardo  wrote:
> As a side note, the WSL terminal on Windows works properly with arc. The
> only downside is that you need a Windows terminal to build and a separate
> WSL terminal to arc diff...
>
> On Fri, Jul 27, 2018 at 3:25 PM Mark Côté  wrote:
>
>> I plan on updating a bunch of MDN docs within the next couple weeks. I
>> agree that the Windows installation can be confusing, and yes, I'd like to
>> package something. We're just trying to figure out the timeline for the
>> arc-less client, but it may well be worth packaging Arcanist regardless.
>>


Re: Firefox graphics issues on Windows 10 + Firefox 40

2015-08-13 Thread Jeff Muizelaar
AMD bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1189266
Nvidia bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1189940

-Jeff

On Thu, Aug 13, 2015 at 5:22 AM, Tom Schuster  wrote:
> Hey,
>
> people on reddit.com/r/firefox are reporting a fair amount of graphics
> related issues.
> It seems like most of it boils down to newly blacklisted drivers?
> Is there a bug for this somewhere?
>
> Thanks,
> Tom


Re: Alternative to Bonsai?

2015-09-16 Thread Jeff Muizelaar
Blame does work on those files locally. FWIW, fugitive.vim's :Gblame
command can jump back to the blame of the parent revision of the
current line, which makes it much easier to navigate history than any
web-based blame tool I've seen. Even if you only use vim for :Gblame,
I'd say it's still worth using.

-Jeff

On Wed, Sep 16, 2015 at 2:13 PM, Boris Zbarsky  wrote:
> On 9/16/15 2:01 PM, Ehsan Akhgari wrote:
>>
>> Out of curiosity, which files are you mentioning here?
>
>
> Here are some lovely links that all produce "This blame took too long to
> generate.  Sorry about that." for me:
>
> https://github.com/mozilla/gecko-dev/blame/master/layout/base/nsCSSFrameConstructor.cpp
>
> https://github.com/mozilla/gecko-dev/blame/master/dom/base/nsDocument.cpp
>
> https://github.com/mozilla/gecko-dev/blame/master/dom/base/nsGlobalWindow.cpp
>
> I have not done an exhaustive search for such files in our tree, but just
> those three represent a significant fraction of my blame lookups...
>
>
> -Boris


Re: Firefox Nightly keeps crashing on Windows 10

2015-09-21 Thread Jeff Muizelaar
Can you post some links from your about:crashes?

-Jeff

On Mon, Sep 21, 2015 at 7:31 PM, Dhon Buenaventura  wrote:
> Why does Firefox Nightly keep crashing randomly? I am currently using the
> latest build but I still experience random crashes.


Turning on WebGL2 on Nightly

2015-12-16 Thread Jeff Muizelaar
Jeff Gilbert is planning on landing patches very soon that will flip
the webgl.enable-prototype-webgl2 pref to true on Nightly. This change
will stay on Nightly for now.

There are lots of tests from the conformance suite that we don't pass
but we're looking to get more web developers to try out the new APIs
so that we can focus our efforts on the important parts first.
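Until the default flips (or if it later gets flipped back), the pref can also be toggled manually; e.g. via a user.js line (sketch):

```
// user.js sketch: opt in to the prototype WebGL2 implementation
user_pref("webgl.enable-prototype-webgl2", true);
```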

-Jeff


Re: Turning on WebGL2 on Nightly

2015-12-21 Thread Jeff Muizelaar
Bug 1232864

-Jeff

On Mon, Dec 21, 2015 at 7:59 AM, Sylvestre Ledru  wrote:
> Le 16/12/2015 23:44, Jeff Muizelaar a écrit :
>> Jeff Gilbert is planning on landing patches very soon that will flip
>> the webgl.enable-prototype-webgl2 pref to true on Nightly. This change
>> will stay on Nightly for now.
>>
> Is there a bug number for this?
>
> Thanks,
> Sylvestre
>


Re: Use of C++11 std::unique_ptr for the WOFF2 module

2016-02-01 Thread Jeff Muizelaar
Lee Salzman came up with a hacky solution to this problem for the Skia
update that he's working on. I haven't seen it yet, but apparently it
builds.

-Jeff

On Mon, Feb 1, 2016 at 4:29 AM, Frédéric Wang  wrote:
> Dear all,
>
> I'm trying to upgrade our local copy of OTS to version 5.0.0 [1]. OTS
> relies on the Brotli and WOFF2 libraries, whose source code we currently
> include in mozilla-cental.
>
> I tried updating the source code of WOFF2 to the latest upstream
> version. Unfortunately, try server builds fail on OSX and mobile devices
> because the C++11 class std::unique_ptr does not seem to be available.
> IIUC some bugzilla entries and older threads on this mailing list, at
> the moment only some of the C++11 features are usable in the mozilla
> build system. Does any of the build engineer know whether
> std::unique_ptr can be made easily available? Or should we just patch
> the WOFF2 library to use of std::vector (as was done in earlier version)?
>
> Thanks,
>
> --
> Frédéric Wang
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227058


Re: Does SSE2 usage still need to be conditional?

2016-02-01 Thread Jeff Muizelaar
I don't think there are any compilers that support x64 without SSE2.
SSE2 registers are required for passing float parameters in both MS
and System V ABIs.

-Jeff

On Mon, Feb 1, 2016 at 6:00 PM, Xidorn Quan  wrote:
> On Tue, Feb 2, 2016 at 7:04 AM, Benjamin Smedberg  
> wrote:
>>
>>
>> On 1/29/2016 2:05 PM, Cameron Kaiser wrote:
>>>
>>> On 1/29/16 9:43 AM, Ashley Gullen wrote:

 FWIW, the Steam Hardware Survey says 99.99% of users have SSE2 (under
 "other settings"): http://store.steampowered.com/hwsurvey
>>>
>>>
>>> For that to be valid, one must assume that the population of Firefox users
>>> and Steam users are sufficiently similar. I don't think that's necessarily
>>> true since most Steam titles have substantially higher system requirements.
>>
>> The last time we broke this (by accident) was several years ago. At the
>> time, we got vigorous complaining from various people who had relatively
>> recent bare-bones machines without SSE2.
>>
>> It might be worth reconsidering now: I'm not willing to throw away 0.5% of
>> our users without good cause, but perhaps there is a good cause to be made
>> here? What would the performance gain be for the remaining 99.5% of users,
>> realizing that we already have dynamic SSE2/non-SSE switching in place for
>> some of our hottest paths.
>
> The main question here I think is, whether we've enabled SSE2 for 64bit build
>
> It seems to me that if we do, enabling SSE2 on x86 doesn't really
> matter unless we have a good reason. Fewer and fewer people will
> stick with x86, especially those who care about performance.
>
> If we haven't yet done that, we should. It seems to me the majority
> of processors which support x64 also support SSE2. If there are really
> some people who use a processor that doesn't support SSE2 but are using
> 64-bit Firefox, they could simply go back to using the 32-bit version.
>
> - Xidorn


Re: APNG and Accept-Encoding

2016-02-18 Thread Jeff Muizelaar
Is there a response to the criticism of Accept outlined here:
https://wiki.whatwg.org/wiki/Why_not_conneg#Negotiating_by_format
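For context, that wiki page discusses negotiating by format via the Accept request header (Accept-Encoding negotiates compression codings such as gzip, not image formats); a hypothetical exchange would look like:

```
GET /hero-image HTTP/1.1
Accept: image/apng, image/png;q=0.9, image/*;q=0.8

HTTP/1.1 200 OK
Content-Type: image/apng
Vary: Accept
```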

-Jeff

On Wed, Feb 17, 2016 at 6:08 PM, Mike Lawther  wrote:
> Hi Mozilla developers!
>
> tl,dr; can Firefox send an Accept-Encoding heading for APNG?
>
> I'm an engineer at Google working on Chrome. We're considering support for
> APNG.
>
> To support APNG, we think it's important for web developers (including for
> example CDN operators) to be able to decide server-side what content to
> ship. We want to send an Accept-Encoding header. This would be for whatever
> MIME type APNG ends up with, but that's another topic. The latest I've seen
> on this is "vnd.mozilla.apng" (https://bugzilla.mozilla.org/
> show_bug.cgi?id=1160200).
>
> If Chrome does decide to support APNG, it would be ideal for both our
> browsers to be compatible in this respect as well.
>
> Is this something we can coordinate on?
>
> thanks,
>
> Mike Lawther


Re: C++ Core Guidelines

2016-03-24 Thread Jeff Muizelaar
On Wed, Jan 6, 2016 at 7:15 AM, Henri Sivonen  wrote:
> On Thu, Oct 1, 2015 at 9:58 PM, Jonathan Watt  wrote:
>> For those who are interested in this, there's a bug to consider integrating
>> the Guidelines Support Library (GSL) into the tree:
>>
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1208262
>
> This bug appears to have stalled.
>
> What should my expectations be regarding getting an equivalent of (at
> least single-dimensional) GSL span (formerly array_view;
> conceptually Rust's slice) into MFBT?

Something like this already exists: mfbt/Range.h

-Jeff


Re: Intent to enable e10s by default when running tests locally

2016-03-24 Thread Jeff Muizelaar
We fork a process to test gfx early on, so 'set follow-fork-mode child'
might end up following that.
'set detach-on-fork off' will keep you attached to everything, though.
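In practice that suggests a gdb setup along these lines (a sketch using standard gdb fork-handling commands):

```
# Stay attached to every process across forks, so the early gfx
# test fork doesn't steal the debugger away from the real child:
set detach-on-fork off
# Optionally resume all inferiors together rather than one at a time:
set schedule-multiple on
```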

-Jeff

On Thu, Mar 24, 2016 at 1:21 PM, Paul Adenot  wrote:
> Do we know whether `set follow-fork-mode child` in gdb would work ? If
> not, can we fix it ? It would be a pretty good experience for most
> developers that only care about the child.
>
> Paul.
>
> On Thu, Mar 24, 2016, at 06:05 PM, Aaron Klotz wrote:
>> I know that most people aren't debugging e10s on Windows, but if you
>> are, here's a protip (provided that you are using WinDbg):
>>
>> If you include the "-o" option in the debugger args, WinDbg will
>> automatically attach itself to all child processes that are started by
>> the chrome process. No special environment variables or process startup
>> sleeps required.
>>
>> -Aaron
>>
>> On 3/24/2016 10:51 AM, Boris Zbarsky wrote:
>> > On 3/24/16 12:43 PM, Andrew Halberstadt wrote:
>> >> I'm not aware of work around this. If --debugger is completely busted
>> >> with e10s, I could potentially make --debugger imply --disable-e10s
>> >> until it gets fixed. Is there a bug on file?
>> >
>> > I don't know of one.
>> >
>> > It's not that it's busted per se, it's that it attaches the debugger
>> > to the parent process.  Which is not very helpful, in most cases.
>> > What would be ideal (at least on unixy systems) is if we managed to
>> > detect the child process starting and popped up an instance of $TERM
>> > or so, with a debugger running in it and attached to the child.
>> >
>> >> I also forgot to mention that command defaults are likely coming soon,
>> >> so once bug 1255450 lands you'll be able to make a .machrc with:
>> >>
>> >> [defaults]
>> >> mochitest = --disable-e10s
>> >
>> > Ah, nice.  Still, I agree it would be good to be testing e10s if we
>> > can make the debugging experience there better.
>> >
>> > -Boris


Re: Dump frame tree in real time

2016-04-08 Thread Jeff Muizelaar
Check out 
https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Layout_Debugger.
I expect it gets the information that you're looking for.

-Jeff

On Fri, Apr 8, 2016 at 1:38 PM, Jip de Beer  wrote:
> Hi all,
>
> I would like to inspect the Frame Tree (or Render Tree: 
> http://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Render_tree_construction)
>  in real time while browsing with Firefox.
>
> I first tried to access this tree with JavaScript or a browser addon. It 
> seems that this information is not accessible at this level. With 
> document.elementFromPoint() or document.elementsFromPoint() in Chrome it may 
> be possible to re-construct the Frame Tree but this is not optimal. Another 
> way I tried to approach this is the re-implement the CSS stacking mechanism 
> in JavaScript (https://github.com/Jip-Hop/visuallyOnTop). But this is slow, 
> redundant and needs to be maintained as behavior changes.
>
> I looked around in about:config. The closest thing I found was enabling 
> layout.display-list.dump. With Firefox launched from the Terminal, the 
> display-list is dumped in real time. It's almost what I need except that this 
> is only output for elements visible in the viewport. I need to inspect the 
> entire Frame Tree.
>
> The next thing I tried was installing FirefoxDeveloperEdition, hoping this 
> version would somehow have extra tools to access this data. But it seems this 
> version of Firefox is oriented towards debugging websites. Dumping the Frame 
> Tree is probably possible when debugging the browser itself (Debugging Gecko 
> Core).
>
> I read here: 
> https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Debugging_a_Performance_Problem#Layout_Debugging
>  that I could enable this functionality by defining DEBUG_FRAME_DUMP in 
> layout/generic/nsFrameList.h. So I downloaded the Firefox source (nightly), 
> built Firefox Nightly and tried to dump the Frame Tree.
>
> I didn't manage to dump the Frame Tree using lldb... I followed these guides:
> http://mcc.id.au/blog/2014/01/lldb-gecko
> https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_on_Mac_OS_X
> https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/Debugging_Mozilla_with_lldb
> I tried with Firefox, FirefoxDeveloperEdition and the nightly build (ran lldb 
> from Terminal as well as Xcode).
> I was able to attach lldb to the browser, but not output a Frame Tree dump.
> The code for the nightly build was unmodified. When I tried to define 
> DEBUG_FRAME_DUMP in layout/generic/nsFrameList.h (by uncommenting a block of 
> code) the build failed.
> I noticed when debugging with lldb, the browser hangs until I quit lldb. I 
> want to dump and inspect the Frame Tree in real time. So locking the browser 
> with a debugger like lldb may not be the way to go after all.
>
> So what are the possibilities for dumping the Frame Tree in real time? It 
> would be awesome if it could be done from JavaScript in the browser (without 
> being super slow), but logging to a file or outputting to the Terminal in 
> real time are also acceptable solutions. Also it would be great if it's 
> available from the default Firefox or FirefoxDeveloperEdition apps, as it 
> feels like a lot of trouble to make custom builds and edits to enable this 
> functionality.
>
> I'm using the latest versions of Firefox on Mac OS X 10.10.
>
> Looking forward to your replies.
>
> Thanks in advance.
> Jip


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-04-28 Thread Jeff Muizelaar
Do we use any of the OS-specific parts of ICU?

-Jeff

On Thu, Apr 28, 2016 at 1:00 PM, Jonathan Kew  wrote:

> We make considerable (and growing) use of ICU for various aspects of i18n
> support in Gecko.†
>
> The ICU project is proposing to drop support for Windows XP and OS X 10.6
> in version 58; I guess this will be released sometime shortly after Unicode
> 9.0, which is due to appear in June.
>
> Markus (in the message forwarded below) mentions October 2016; I assume
> that's when they expect to end support for ICU 57.
>
> So we need to decide how we're going to respond to this. Some options for
> consideration:
>
> (a) Adopt ICU 58 when released, and drop Gecko support for WinXP and OSX
> 10.6.
>
> (b) Keep Gecko on ICU 57 and Unicode 8.0 until  when? AFAIK, we have
> not made any firm decisions regarding EOL for Firefox on these platforms.
>
> (c) Keep Gecko on ICU 57 code, but update its data files to support
> Unicode 9.0. This would take some effort on our side, though _probably_ not
> very much.
>
> (d) Push back against the ICU proposal to drop these platforms, and see if
> we can convince them to delay it. (No guarantees, though at least they're
> asking. If we had a specific end date to propose, I'd guess that might help
> our case.)
>
> In the case of either (b) or (c), we'd also need to take responsibility
> for handling any critical security issues that are discovered that affect
> the no-longer-maintained version we'd be shipping (e.g. by backporting
> fixes from the latest upstream version).
>
>
> Thoughts?
>
> JK
>
>
> † Except on Android, where we maintain separate code to support some
> features; others are simply missing.
>
>
>  Forwarded Message 
> Subject:Re: [icu-design] [icu-support] Drop Windows XP and OSX 10.6
> support
> Date:   Thu, 28 Apr 2016 08:55:55 -0700
> From:   Markus Scherer 
> To: icu-design 
> CC: ICU support mailing list ,
> Jonathan Kew 
>
>
>
> On Wed, Apr 27, 2016 at 4:30 PM, Steven Loomis wrote:
>
> Jonathan and other users,
>   Please comment on whether dropping Windows XP for ICU 58 will
> cause significant problems.
>   We discussed this for 57 (as per below) but no code changes were
> made.
>
>
> For ICU 57, we were just thinking of removing some Windows XP-specific
>   synchronization code. We decided to just keep that for 57.
>
> For ICU 58, we are looking at switching more code over to Windows
> Vista/7/8 APIs because Windows XP and Windows Server 2003 only support
> i18n APIs with LCID parameters and cannot support some languages at all.
> Newer Windows versions added APIs that take language tag strings. This
> is important for an i18n library on a major platform...
>
> For how long do you plan to support Windows XP past October 2016? Could
> you stay on ICU 57/CLDR 29/Unicode 8 until you stop supporting Windows XP?
>
> Also, Windows Vista seems to have very low market share and seems to be
> getting dropped by vendors around the same time they drop XP.
> Is it ok to skip Vista and set Windows 7 as the new base for ICU 58?
>
> Best regards,
> markus
>
>


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-04-28 Thread Jeff Muizelaar
On Thu, Apr 28, 2016 at 1:39 PM, Jonathan Kew  wrote:

> On 28/4/16 18:11, Jeff Muizelaar wrote:
>
>> Do we use any of the OS-specific parts of ICU?
>>
>
> I don't know.
>
> But even if we don't, I suspect that once they drop support for XP / 10.6,
> it won't be long before the project as a whole becomes increasingly
> difficult to build for those targets, as it'll start assuming support for
> compiler and/or runtime library features that aren't readily available
> there.
>

True, but the ICU project may be more willing to compromise on things like
that than on the OS-specific functionality discussed earlier in the thread.

-Jeff



Re: Reverting to VS2013 on central and aurora

2016-05-11 Thread Jeff Muizelaar
Or mozglue/build/SSE.cpp

-Jeff

On Wed, May 11, 2016 at 9:35 AM, Ehsan Akhgari 
wrote:

> On 2016-05-10 10:01 PM, Robert Strong wrote:
> > On Tue, May 10, 2016 at 6:55 PM, Lawrence Mandel 
> > wrote:
> >
> >> On Fri, May 6, 2016 at 12:39 PM, Benjamin Smedberg <
> benja...@smedbergs.us>
> >> wrote:
> >>
> >>> I agree that we should drop support for non-SSE2. It mattered 7 years
> ago
> >>> (see https://bugzilla.mozilla.org/show_bug.cgi?id=500277) but it
> really
> >>> doesn't matter now.
> >>>
> >>> We do need to avoid updating these users to a build that will crash,
> and
> >> do
> >>> the same "unsupported" messaging we're doing for old versions of MacOS.
> >>> Gregory, will you own that? You will probably need to add CPU feature
> >>> detection to the update URL/params for 47, or use some kind of system
> >> addon
> >>> to shunt these users off the main update path.
> >>>
> >>
> >> Benjamin - We're likely going to want to do this again in the future.
> Not
> >> that Greg isn't capable but is there someone more familiar with the
> update
> >> system who can step in to get this done?
> >>
> > The majority of this involves getting whether the CPU supports SSE. The
> app
> > update part involves inserting the value into the url.
>
> In theory you should be able to lift that code from
> <
> https://dxr.mozilla.org/mozilla-central/source/js/src/jit/x86-shared/Assembler-x86-shared.cpp#219
> >.
>  Unfortunately I don't think this is exported from the JS engine, but
> should be possible to copy the CPU feature detection parts out of it.


Re: Intent to ship: Canvas CSS/SVG filters

2016-05-31 Thread Jeff Muizelaar
How does performance compare to Chrome?

-Jeff

On Thu, May 26, 2016 at 12:40 PM, Tobias Schneider 
wrote:

> I intend to turn Canvas CSS/SVG filters on by default on all platforms. It
> has been developed behind the canvas.filters.enabled preference. Google's
> Chrome is already shipping this in version 52.
>
> Related bugs:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=927892
> https://bugzilla.mozilla.org/show_bug.cgi?id=1173545
>
> Specification:
>
> https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-filter


Re: Intent to ship: Canvas CSS/SVG filters

2016-06-01 Thread Jeff Muizelaar
Can you also get results on Windows?

-Jeff

On Wed, Jun 1, 2016 at 3:05 PM, Tobias Schneider 
wrote:

> I got the following numbers running
> https://dl.dropboxusercontent.com/u/55355076/benchmark.html?filters=true
> on my MacBook Pro (Mid 2014):
>
> Firefox Developer Edition: Skia-GL: 10fps
>   Skia: 3fps
>   CG: 10fps
>   Cairo:8fps
>
> Chrome Canary 53: 3fps
>
>
> On Tue, May 31, 2016 at 11:53 AM, Jeff Muizelaar 
> wrote:
>
>> How does performance compare to Chrome?
>>
>> -Jeff
>>
>> On Thu, May 26, 2016 at 12:40 PM, Tobias Schneider <
>> tschnei...@mozilla.com> wrote:
>>
>>> I intend to turn Canvas CSS/SVG filters on by default on all platforms.
>>> It
>>> has been developed behind the canvas.filters.enabled preference. Google's
>>> Chrome is already shipping this in version 52.
>>>
>>> Related bugs:
>>>
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=927892
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1173545
>>>
>>> Specification:
>>>
>>>
>>> https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-filter


Re: DXR problem?

2016-07-05 Thread Jeff Muizelaar
Is this what you're looking for?
https://dxr.mozilla.org/mozilla-central/search?q=voice

-Jeff

On Sun, Jul 3, 2016 at 5:52 AM, Richard Z  wrote:

> Hi,
>
> tried dxr as replacement for lxr yesterday and today and it
> does not seem to work for me.
> Whatever I type into the searchbox the results is just an
> empty "This page was generated by DXR ."?
>
> https://dxr.mozilla.org/mozilla-central/search?q=voice&redirect=true
>
> Displaying source like
> https://dxr.mozilla.org/mozilla-central/source/mobile/android/modules/Prompt.jsm
> works but does not provide any xref links??
>
> Richard
>
> --
> Name and OpenPGP keys available from pgp key servers
>


Re: Intent to Implement: adding vector effects non-scaling-size, non-rotation and fixed-position to SVG

2016-12-29 Thread Jeff Muizelaar
I'm concerned about the complexity this will add to the SVG implementation
as we're looking to transition to WebRender.

Can the desired effects be achieved by interleaving HTML and SVG content
today? E.g., it seems like the introductory-notes example could just use a
separate SVG element with fixed positioning, instead of needing to build
fixed-position into SVG.

Do other browsers intend to implement these features?

-Jeff

On Sun, Dec 25, 2016 at 9:47 PM, Ramin  wrote:

> Intent to Implement: adding vector effects  non-scaling-size, non-rotation
> and fixed-position to SVG
>
> Contact emails
> 
> te-fuk...@kddi-tech.com, g-ra...@kddi-tech.com, sa-...@kddi-tech.com
>
> Summary
> 
> To offer vector effects regarding special coordinate transformations and
> graphic drawings as described in following Spec link,
> SVG Tiny 1.2 introduced the vector-effect property. Although SVG Tiny 1.2
> introduced only non-scaling stroke behavior, this version introduces a
> number of additional effects.
>
> We intend now to implement non-scaling-size, non-rotation and
> fixed-position, as well as their combination to Gecko/SVG.
>
> Motivation
> 
> It is a point of interest for many SVG content providers to let the outline
> of an object keep its original size, or to keep the position of an object
> fixed, regardless of the transforms that are applied to it. For example,
> in a map with a 2px wide line representing roads, it is of
> interest to keep the roads 2px wide even when the user zooms into the map,
> or to keep introductory notes fixed on a chart in which panning is possible.
> Therefore, there is a high need for supporting these features in browsers.
>
>
> Spec(Link to standard)
> 
> https://svgwg.org/svg2-draft/coords.html#VectorEffects
>
> Platform coverage
> 
> Starting from Windows and Linux.
>
> Bug
> 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1318208
>
> Estimated or target release
> 
> 2017/1/30
>
> Requesting approval to ship?
> 
> No.  Implementation is expected to take some time.
>
> Tests
> 
> Coming soon.


Re: improving access to telemetry data

2013-02-28 Thread Jeff Muizelaar

On 2013-02-28, at 10:44 AM, Benjamin Smedberg wrote:

> On 2/28/2013 10:33 AM, Benoit Jacob wrote:
>> Please, please make your plans include the ability to get raw text files
>> (CSV or JSON or something else, I don't care as long as I can easily parse
>> it).
> Could you be more specific? Note that while text files are currently provided 
> on crash-analysis, they are not the full dataset: they include a limited and 
> specific set of fields. It looks to me like telemetry payloads typically 
> include many more fields, and some of these are not single-value fields but 
> rather more complex histograms and such. Putting all of that into text files 
> may leave us with unworkably large text files.

I've also been using these text files for gathering cpu specific information:
https://github.com/jrmuizel/cpu-features

eg:

sse2 97.5126791126%
amd 30.9852560634%
coreavg 2.29447221529
coremax 32
mulicore 81.239118672%
windowsxp 34.260938801%
fourcore 19.3838329749%

('GenuineIntel', 6, 23) 16.8679157473 Core 2 Duo 45nm
('GenuineIntel', 6, 15) 11.4360128852 Core 2 Duo Allendale/Kentsfield 65nm
('GenuineIntel', 6, 42) 9.75864128134 Core i[735] Sandybridge
('AuthenticAMD', 20, 2) 7.62036395852 AMD64 C-60
('GenuineIntel', 6, 37) 6.61528670635 Core i[735] Westmere
('GenuineIntel', 15, 4) 6.41735517496 Pentium 4 Prescott 2M 90nm
('AuthenticAMD', 20, 1) 5.85243727729 AMD64 C-50
('AuthenticAMD', 16, 6) 4.60457148945 AMD64 Athlon II
('GenuineIntel', 15, 2) 3.34941643841 Pentium 4 Northwood 130nm
('GenuineIntel', 15, 6) 2.74496830572 Pentium D
('GenuineIntel', 6, 28) 2.56862420764 Atom
('AuthenticAMD', 15, 107) 1.78671055177 AMD64 X2
('GenuineIntel', 6, 22) 1.55990232387 Core based Celeron 65nm
('GenuineIntel', 6, 58) 1.47130974042 Core i[735] Ivybridge
('GenuineIntel', 6, 14) 1.18506598185 Core Duo 65nm
('GenuineIntel', 15, 3) 1.10796800574 Pentium 4 Prescott 90nm
('AuthenticAMD', 6, 8) 0.963864879489 Athlon (Palomino) XP/Duron
('GenuineIntel', 6, 13) 0.907232911584 Pentium M
('AuthenticAMD', 16, 5) 0.876674077418 Athlon II X4
('AuthenticAMD', 17, 3) 0.81163142121 
('AuthenticAMD', 18, 1) 0.7381780767 
('AuthenticAMD', 15, 44) 0.702012116998 
('AuthenticAMD', 16, 4) 0.670892570278 Athlon II X4
('AuthenticAMD', 16, 2) 0.621830221846 Athlon II X2
('GenuineIntel', 15, 1) 0.608373120562 Pentium 4 Willamette 180nm
('GenuineIntel', 6, 30) 0.578935711502 
('AuthenticAMD', 15, 127) 0.576692861288 
('AuthenticAMD', 6, 10) 0.545012602015 Athlon MP
('AuthenticAMD', 15, 104) 0.502959160501 
('AuthenticAMD', 15, 75) 0.486698496449 
('AuthenticAMD', 15, 47) 0.463148569202 
('AuthenticAMD', 15, 67) 0.423898690456 
('AuthenticAMD', 15, 95) 0.413805864493 
('GenuineIntel', 6, 54) 0.404834463636 
('AuthenticAMD', 15, 79) 0.327456131252 
('AuthenticAMD', 21, 16) 0.294654446871 
('GenuineIntel', 6, 26) 0.278954495373 
('GenuineIntel', 6, 8) 0.252881361634 Pentium III Coppermine 0.18 um
('AuthenticAMD', 21, 1) 0.234938559922 
('AuthenticAMD', 16, 10) 0.201576162988 
('AuthenticAMD', 15, 72) 0.163167353072 
('GenuineInte', 0, 0) 0.159242365198 
('AuthenticAMD', 6, 6) 0.150270964341 Athlon XP
('AuthenticAMD', 15, 76) 0.132608518906 
('GenuineIntel', 6, 9) 0.130926381245 
('AuthenticAMD', 15, 12) 0.105974672614 
('GenuineIntel', 6, 7) 0.102610397293 Pentium III Katmai 0.25 um
('GenuineIntel', 6, 44) 0.0986854094183 
('AuthenticAMD', 15, 124) 0.0964425592042 
('AuthenticAMD', 15, 4) 0.0939193527134 
('GenuineIntel', 6, 11) 0.0894336522853 Pentium III Tualatin 0.13 um
('CentaurHauls', 6, 13) 0.0832658141967 
('AuthenticAMD', 15, 28) 0.0827051016432 
('AuthenticAMD', 15, 36) 0.0695283566356 
('AuthenticAMD', 6, 7) 0.055230186521 Duron Morgan
('AuthenticAMD', 15, 43) 0.0521462674767 
('GenuineIntel', 6, 45) 0.0428945103437 
('AuthenticAM', 0, 0) 0.0420534415135 
('GenuineIntel', 15, 0) 0.0392498787459 
('AuthenticAMD', 15, 35) 0.0361659597016 
('AuthenticAMD', 6, 4) 0.0361659597016 Athlon
('AuthenticAMD', 15, 31) 0.0299981216129 
('AuthenticAMD', 6, 3) 0.0297177653362 Athlon Duron
('AuthenticAMD', 15, 39) 0.0260731337384 
('AuthenticAMD', 21, 2) 0.0179428017124 
('AuthenticAMD', 15, 15) 0.0176624454357 
('CentaurHauls', 6, 10) 0.0159803077751 
('GenuineIntel', 6, 10) 0.0117749636238 
('AuthenticAMD', 15, 63) 0.0117749636238 
('AuthenticAMD', 15, 55) 0.011494607347 
('CentaurHauls', 6, 15) 0.01037318224 
('GenuineIntel', 6, 6) 0.00953211340972 Pentium II Mendocino 0.25 um
('CentaurHauls', 6, 9) 0.00728926319567 
('GenuineIntel', 6, 5) 0.00672855064216 Pentium II Deschutes 0.25 um
('CentaurHauls', 6, 7) 0.00532676925837 VIA Ezra/Samuel 2
('AuthenticAMD', 15, 108) 0.00364463159783 
('AuthenticAMD', 15, 33) 0.00364463159783 
('GenuineInte', 6, 15) 0.00336427532108 
('AuthenticAMD', 15, 7) 0.00336427532108 
('GenuineIntel', 6, 53) 0.00308391904432 
('AuthenticAMD', 15, 8) 0.00308391904432 
('GenuineIntel', 6, 47) 0.00280356276757 
('AuthenticAMD', 16, 8) 0.00280356276757 
('GenuineIntel', 6, 46) 0.00224285021405 
('GenuineInte', 6, 23) 0.0022428502140
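The percentage figures above are just frequency counts of CPUID (vendor, family, model) triples across telemetry records. A minimal sketch of that kind of tally, with made-up sample rows since the real crash-analysis text format isn't shown in this thread:

```python
from collections import Counter

# Hypothetical sample rows; real data would be parsed from the
# crash-analysis text files, whose format isn't shown here.
rows = [
    ("GenuineIntel", 6, 23),
    ("GenuineIntel", 6, 23),
    ("AuthenticAMD", 20, 2),
    ("GenuineIntel", 6, 42),
]

counts = Counter(rows)
total = len(rows)
for cpu, n in counts.most_common():
    # Print each (vendor, family, model) triple with its share, as above.
    print(cpu, 100.0 * n / total)
```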

Re: WebP support

2013-04-08 Thread Jeff Muizelaar
No decision has been made yet. We are still evaluating the format.

-Jeff

On 2013-04-08, at 5:09 AM, David Bruant wrote:

> Hi,
> 
> (I'm not 100% sure this is the proper mailing list to ask this question, but 
> I can't think of a more relevant mailing-list at this time. Please forward if 
> inappropriate)
> 
> After a long period of reluctance, Mozilla is deciding to implement WebP 
> [1][2]. The only explanation I have been able to find was:
> "We decided to re-open this based on new data that shows that WebP has valid 
> use cases and advantages."
> Is there a longer explanation somewhere? If possible one where Mozilla 
> previous concerns are addressed.
> 
> Thanks,
> 
> David
> 
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=600919#c185
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=856375


Re: WebP support

2013-04-08 Thread Jeff Muizelaar
Sure. Everything.me was seeing large gains when using lossy image compression 
with an alpha channel compared to PNG. This isn't a surprise, but it's a use 
case that isn't well supported by the image formats we currently support.

-Jeff

On 2013-04-08, at 12:53 PM, Ralph Giles wrote:

> On 13-04-08 4:06 AM, Jeff Muizelaar wrote:
>> No decision has been made yet. We are still evaluating the format.
> 
> I think the concern is that none of that re-evaluation has been on a
> public list or bug I've seen. Can you clarify what Andreas meant by,
> "new data that shows that WebP has valid use cases and advantages" in
> https://bugzilla.mozilla.org/show_bug.cgi?id=600919#c185 ?
> 
> -r
> 



Re: CSS outline-color: invert

2013-04-09 Thread Jeff Muizelaar
Since IE supports this with hardware acceleration, I don't think there's any 
theoretical reason we couldn't. That said, it is probably a lot of work and 
probably not worth doing right now.

-Jeff

On 2013-04-09, at 2:21 PM, Matt Brubeck wrote:

> Support for "outline-color: invert" was removed in Firefox 3, and a bug to 
> restore support was RESOLVED WONTFIX several years ago because it was deemed 
> infeasible while using hardware acceleration [1].
> 
> Since then, some developers have asked for reconsideration on the bug; since 
> some of their questions in the bug remain unanswered, one developer asked me 
> to bring it up here.
> 
> The specific question is whether the many changes to our graphics 
> acceleration since 2009 have made it any more feasible to implement "invert" 
> today.  Do IE or Opera use any tricks to do this that could be useful for us 
> too?
> 
> In addition to the obvious use case of making outlines visible against 
> multiple background colors, apparently this can be used for hacks [2] to 
> invert entire regions, similar to David Baron's SVG filter demo [3] but 
> possibly more flexible.
> 
> [1]: https://bugzilla.mozilla.org/show_bug.cgi?id=359497#c24
> [2]: http://lea.verou.me/2011/04/invert-a-whole-webpage-with-css-only/
> [3]: http://dbaron.org/log/20110430-invert-colors


Re: Virtual Memory fragmentation issues

2013-04-09 Thread Jeff Muizelaar

On 2013-04-08, at 7:46 PM, Benjamin Smedberg wrote:

> In stability-land we're starting to see some "interesting" problems relating 
> to virtual memory usage in Firefox on Windows.


> Either our code or the ATI driver is leaking mapped memory references in a 
> way that chews up VM space without actually using memory. We need to figure 
> out why, and fix it or work around the problem. I'm hoping that one of you on 
> the "To" of this email may be able to help figure out a way to either patch 
> Firefox or run an external debugging tool which can help record information 
> about the stacks when these mappings are typically created and destroyed, and 
> help create a detailed understanding of the problem so that we can either fix 
> our code or hand the problem to the nvidia developers with the most 
> reproducible/detailed report possible.

One interesting phenomenon that I have seen is pages being mysteriously added 
to firefox's address space. I believe this can happen when graphics memory is 
paged into system memory and is added to our address space. I'm not sure what 
the OS accounting knows about this and what it doesn't.

-Jeff



Re: Virtual Memory fragmentation issues

2013-04-09 Thread Jeff Muizelaar
Just allocating a bunch of Texture memory and looking at the process in VMMap 
should show the problem. Process Explorer also lets you view per process GPU 
memory consumption which might be interesting to look at.

-Jeff

On 2013-04-09, at 3:29 PM, Jet Villegas wrote:

> Jeff: how mysterious is this phenomenon? Can we write a test that forces it to 
> happen? I'm curious how it behaves on ATI vs. other drivers.
> 
> --Jet
> 
> 
> - Original Message -----
> From: "Jeff Muizelaar" 
> To: "Benjamin Smedberg" 
> Cc: "Justin Lebar" , "Aaron Klotz" , 
> dev-platform@lists.mozilla.org, "Nicholas Nethercote" 
> , "Brian R. Bondy" 
> Sent: Tuesday, April 9, 2013 12:24:32 PM
> Subject: Re: Virtual Memory fragmentation issues
> 
> 
> On 2013-04-08, at 7:46 PM, Benjamin Smedberg wrote:
> 
>> In stability-land we're starting to see some "interesting" problems relating 
>> to virtual memory usage in Firefox on Windows.
> 
> 
>> Either our code or the ATI driver is leaking mapped memory references in a 
>> way that chews up VM space without actually using memory. We need to figure 
>> out why, and fix it or work around the problem. I'm hoping that one of you on 
>> the "To" of this email may be able to help figure out a way to either patch 
>> Firefox or run an external debugging tool which can help record information 
>> about the stacks when these mappings are typically created and destroyed, 
>> and help create a detailed understanding of the problem so that we can 
>> either fix our code or hand the problem to the nvidia developers with the 
>> most reproducible/detailed report possible.
> 
> One interesting phenomenon that I have seen is pages being mysteriously added 
> to firefox's address space. I believe this can happen when graphics memory is 
> paged into system memory and is added to our address space. I'm not sure what 
> the OS accounting knows about this and what it doesn't.
> 
> -Jeff
> 


Re: Rethinking the amount of system JS we use in Gecko on B2G

2013-04-22 Thread Jeff Muizelaar

On 2013-04-22, at 2:15 PM, Bill McCloskey wrote:

> I can't agree with you more, Justin. I think Boris is right that we should 
> make these decisions on a case-by-case basis. But in the case of these 
> workers, it seems clear that converting them to C++ is the way to go, 
> assuming we have the resources to do so.

So a specific case that I ran into during the Performance Workshop is 
RILContentHelper.js. During the startup of the settings app
we jank for 53ms while initializing the RILContentHelper.js: 

http://people.mozilla.com/~bgirard/cleopatra/#report=bf7077c6552fe2bc015d7074a338b673911f3ce8&search=Mobile

There doesn't seem to be anything specific taking that much time in the 
profile, just general JS overhead. In this case RILContentHelper.js is wrapped 
by by C++ code in dom/network/src/MobileConnection.cpp and so we end up 
spending a fair amount of time transitioning from JS to C++ to JS to C++.

-Jeff


Re: Rethinking the amount of system JS we use in Gecko on B2G

2013-04-22 Thread Jeff Muizelaar

On 2013-04-22, at 3:44 PM, Terrence Cole wrote:

> On 04/22/2013 12:12 PM, Jeff Muizelaar wrote:
>> On 2013-04-22, at 2:15 PM, Bill McCloskey wrote:
>> 
>>> I can't agree with you more, Justin. I think Boris is right that we should 
>>> make these decisions on a case-by-case basis. But in the case of these 
>>> workers, it seems clear that converting them to C++ is the way to go, 
>>> assuming we have the resources to do so.
>> So a specific case that I ran into during the Performance Workshop is 
>> RILContentHelper.js. During the startup of the settings app
>> we jank for 53ms while initializing the RILContentHelper.js: 
>> 
>> http://people.mozilla.com/~bgirard/cleopatra/#report=bf7077c6552fe2bc015d7074a338b673911f3ce8&search=Mobile
> 
> That link gives me this: "Error fetching profile :(. URL:
> 'http://profile-store.commondatastorage.googleapis.com/bf7077c6552fe2bc015d7074a338b673911f3ce8'.
> Did you set the CORS headers?"

That's weird. The link works for others and the CORS headers should be set.

> 
>> 
>> There doesn't seem to be anything specific taking that much time in the 
>> profile, just general JS overhead. In this case RILContentHelper.js is 
>> wrapped by C++ code in dom/network/src/MobileConnection.cpp and so we end 
>> up spending a fair amount of time transitioning from JS to C++ to JS to C++.
> 
> That seems like the sort of thing that SpiderMonkey may be able to
> address in the short term, depending on what exactly it turns out to be.
> Is there a bug on file somewhere to coordinate the investigation?

I don't know if there's anything surprising here. Calling into JS from C++ goes 
through XPConnect, which is a long-standing source of slowness.

-Jeff


Re: Removing support for OS/2

2013-08-01 Thread Jeff Muizelaar

On 2013-08-01, at 7:38 PM, Mike Hommey wrote:

> On Thu, Aug 01, 2013 at 04:13:23PM -0700, Gregory Szorc wrote:
>> We have a number of references to OS/2 throughout the build system
>> and source tree. According to Kyle Huey OS/2 has likely broken since
>> we removed --disable-ipc (bug 638755) in March 2011.
> 
> There have been OS/2-related changes landing way after that date, so I
> doubt it is actually broken. In fact, there's been an OS/2 specific
> landing a week ago (!).

I removed the NSPR TimeStamp implementation on May 4, 2012. We've only been 
supporting POSIX, Windows, and Mac OS X since then.

-Jeff


Re: vsync proposal

2013-08-13 Thread Jeff Muizelaar

On 2013-08-12, at 11:05 PM, Robert O'Callahan wrote:

> Tell me what you think.
> https://wiki.mozilla.org/User:Roc/VsyncProposal

A few things are not clear to me from this proposal:
 - When is the vsync event sent?
 - How does it deal with a blocking swapbuffers()?
 - What happens in the case where we can't quite paint at vsync?

-Jeff



Re: Should we build a new in-process unwind library?

2013-08-30 Thread Jeff Muizelaar

On 2013-08-30, at 4:58 AM, Julian Seward wrote:

> I am very tempted to create a new custom unwind library designed
> specifically to support SPS.  It needs to be fast, lower-footprint,
> and multithreaded.  Unlike Breakpad, it will -- at least initially --
> avoid supporting all Tier 1 targets, and concentrate on the most
> critical cases: ARM/FirefoxOS, ARM/Android (via EXIDX) and (as a side
> effect) {x86,x86_64}/Linux (via CFI).

Have you looked at libbacktrace? 
(https://github.com/mirrors/gcc/tree/master/libbacktrace)

Is it a more useful starting point?

-Jeff



Re: Introducing Brotli - an alternative to LZMA

2013-09-11 Thread Jeff Muizelaar

On 2013-09-11, at 5:55 AM, Mike Hommey wrote:

> On Wed, Sep 11, 2013 at 06:49:58AM +0100, Jonathan Kew wrote:
>> However, several concerns regarding LZMA (lack of formal
>> specification combined with complexity of the code, making careful
>> security review and maintenance difficult; relatively slow
>> decompression)
> 
> Another problem with LZMA is the amount of memory it requires for
> decompression.

Are you sure? http://www.7-zip.org/sdk.html claims "Small memory requirements 
for decompression: 8-32 KB + DictionarySize". This seems similar to what Flate 
requires.

Brotli increases the window size, and thus the memory requirement, to 4MB, 
which is quite a bit. It's also larger than the cache size on mobile devices, 
which is currently around 1MB, so it would be interesting to see decompression 
speeds on small-cache devices.
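A rough way to see the point that decoder memory is dominated by the dictionary (window) size, using Python's stdlib lzma as a stand-in (Brotli itself isn't in the stdlib; the sizes chosen here just mirror the Flate-vs-Brotli window comparison):

```python
import lzma

data = b"the quick brown fox jumps over the lazy dog " * 2000

sizes = {}
for dict_size in (1 << 15, 1 << 22):  # 32 KB (Flate-like) vs 4 MB (Brotli-like)
    # dict_size sets the LZMA2 window, which dominates decoder memory use.
    filters = [{"id": lzma.FILTER_LZMA2, "preset": 6, "dict_size": dict_size}]
    blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    assert lzma.decompress(blob) == data  # round-trips regardless of window
    sizes[dict_size] = len(blob)
print(sizes)
```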

Brotli seems to have a similar design to OodleLZHLW 
(http://www.radgametools.com/oodlecompressors.htm); it would of course be 
interesting to know how they compare.

-Jeff


Re: Introducing Brotli - an alternative to LZMA

2013-09-11 Thread Jeff Muizelaar

On 2013-09-11, at 9:43 AM, Jonathan Kew wrote:

> On 11/9/13 14:12, Jeff Muizelaar wrote:
> 
>> Brotli increases the window size and thus memory requirement to 4MB
>> which is quite a bit. It's also larger than the cache size on mobile
>> devices which is currently around 1MB so it would be interesting to
>> see decompression speeds on small cache devices.
> 
> That's an interesting point, which we should certainly raise with the 
> developers.
> 
>> Brotli seems to have similar design to OodleLZHLW
>> (http://www.radgametools.com/oodlecompressors.htm) it would of course
>> be interesting to know how it competes.
> 
> It would, although that looks like a commercial product and thus not 
> something we could adopt as part of the open Web platform, even if it appears 
> to have particularly desirable performance

Certainly. I just mean that it's important to compare Brotli against 
compression schemes that are targeted at the same design space, to see how well 
it fares against its real competition. Another candidate for comparison is 
LZHAM, which is basically LZMA using Huffman coding instead of range coding.

-Jeff


Re: Studying Lossy Image Compression Efficiency

2013-10-19 Thread Jeff Muizelaar


- Original Message -
> On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> > On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> > Do you have such a sample?
> 
> For what it's worth here's an image I made quite awhile ago showing the
> results of my own blind subjective comparison between codecs:
> http://www.filedropper.com/lossy

I agree that in this comparison JPEG is clearly the worst. However, the bitrate 
that you are using here is well below the target for which JPEG is designed to 
be used, and the quality of all of the image formats is lower than would be 
acceptable for nearly all purposes. This makes these results much less 
interesting than results at the quality levels typically used on the web.

-Jeff



Re: Subpixel AA text rendering on OSX

2014-01-21 Thread Jeff Muizelaar

On Jan 20, 2014, at 5:48 PM, Matt Woodrow  wrote:

> Hi,
> 
> Currently in gecko we have code to determine if text being drawn into a 
> transparent surface has opaque content underneath it. In the case where it 
> doesn't we ask moz2d/cairo [1] to disable subpixel AA text rendering for this 
> surface since it can have poor results depending on what the eventual 
> background color is.
> 
> However, this setting was only ever respected for the GDI font rendering 
> backend with cairo, and the other backends ignored it (unless they do 
> something internally?). Moz2D matched this until recently, until bug 941095 
> made the CoreGraphics backend start respecting the value as well.
> 
> The two extreme examples of where this might matter are as follows:
> 
> - A transparent surface with a very nearly opaque background color (the OSX 
> context menus use rgba(255,255,255,0.95) iirc). In this case it's probably 
> ok to use subpixel AA since we're so close to opaque. The OSX context menus 
> used to get subpixel AA text in this case since the value was ignored, but no 
> longer do (on nightly) since we respect the value.
> 
> - A fully transparent surface containing only text. It should be fairly easy 
> to get this to happen by 3d transforming a  element. I suspect we don't 
> want subpixel AA here, since we have no idea what the final background color 
> might be (and it could change every frame if the transform is animated). CG 
> was previously still giving us subpixel AA in this case, but won't be any 
> more.

I suspect Safari will drop the subpixel AA in this case, and we should too.

> I doubt we want to ship the text quality regression for OSX context menus, so 
> we need to find a solution to this. Preferably something that is consistent 
> across backends :)
> 
> Possible ideas:
> 
> - Revert the change to the Moz2D CG backend. This takes us back to matching 
> our previous behaviour, but not ideal to have Moz2D API functions that are 
> ignored on some backends.

I’d suggest we do this for now. OS X supports drawing subpixel-AA text to 
partially transparent surfaces. I believe the amount of subpixel AA depends on 
the alpha of the surface. (Imagine drawing subpixel-AA text to a transparency 
group to get an idea of how this would work.)

On OS X we really only want to be disabling subpixel-AA when we're drawing to a 
surface that is not, or will not be, pixel-aligned.

-Jeff



Re: Tagging legitimate main thread I/O

2014-02-07 Thread Jeff Muizelaar

On Feb 7, 2014, at 10:31 AM, David Rajchenbach-Teller  
wrote:

> When we encounter main thread I/O, most of the time, it is something
> that should be rooted out. However, in a few cases (e.g. early during
> startup, late during shutdown), these pieces of I/O should actually be
> left untouched.
> 
> Since main thread I/O keeps being added to the tree, for good or bad
> reasons, I believe that we should adopt a convention of tagging
> legitimate main thread I/O.
> 
> e.g. :
> - « Reading on the main thread as threads are not up yet ».
> - « Reading on the main thread as we may need to install XPCOM
> components required before profile-after-change. »
> - ...
> 
> Any thoughts?

I think this is a good idea.

Another example of main-thread I/O that we don’t have a lot of control over is 
some of the font reading that happens during rasterization, or other system 
APIs that we call.

-Jeff


Re: How to efficiently walk the DOM tree and its strings

2014-03-03 Thread Jeff Muizelaar

On Mar 3, 2014, at 2:28 PM, Felipe G  wrote:

> Hi everyone, I'm working on a feature to offer webpage translation in
> Firefox. Translation involves, quite unsurprisingly, a lot of DOM and
> strings manipulation. Since DOM access only happens in the main thread, it
> brings the question of how to do it properly without causing jank.

What does Chrome do?

-Jeff


Re: Studying Lossy Image Compression Efficiency

2014-03-07 Thread Jeff Muizelaar

On Feb 23, 2014, at 5:17 PM, evacc...@gmail.com wrote:

> On Monday, October 21, 2013 8:54:24 AM UTC-6, tric...@accusoft.com wrote:
>>> - I suppose that the final lossless step used for JPEGs was the usual 
>>> Huffman encoding and not arithmetic coding, have you considered testing the 
>>> later one independently?
>> 
>> 
>> 
>> Uninteresting since nobody uses it - except a couple of compression gurus, 
>> the AC coding option is pretty much unused in the field.
> 
> Nobody uses it because there's no browser support, but that doesn't change 
> the fact that it's overwhelmingly better.  And if you're going to compare 
> JPEG to a bunch of codecs with horrible support in the real world, it seems 
> pretty unfair to hold JPEG only to features that are broadly supported.  
> Also, last I looked, the FF team refused to add support for JPEGs with 
> arithmetic encoding, even though the patent issues have long since expired 
> and it's already supported by libjpeg.
> 
> IMO, it's silly not to let JPEG use optimal settings for a test like this, 
> because promulgating an entirely new standard (as opposed to improving an 
> existing one) is much more difficult.

Perhaps it’s easier. However, the point was to see whether new image formats 
were sufficiently better than what we have now to be worth adding support for, 
not to compare image formats to see which one is best.

> I would also like to see the raw libjpeg settings used; were you using float? 
>  Were the files optimized?

These are easy questions for you to answer by reading the source yourself.

-Jeff


Re: Rendering meeting today, Monday 5:30pm PDT ("the later time")

2014-04-21 Thread Jeff Muizelaar
This meeting has been cancelled as there’s nothing substantial on the agenda.

-Jeff

On Apr 21, 2014, at 10:35 AM, Milan Sreckovic  wrote:

> (Sorry for the late notice, I should have sent this out before the weekend, 
> but the holiday meant I missed the reminder.)
> 
> 
> The Rendering meeting is about all things Gfx, Image, Layout, and Media.
> It takes place every second Monday, alternating between 2:30pm PDT and 5:30pm 
> PDT.
> 
> The next meeting will take place today, Monday, April 21 at 5:30 PM US/Pacific
> Please add to the agenda: https://wiki.mozilla.org/Platform/GFX/2014-April-21
> 
> San Francisco - Monday, 5:30pm
> Winnipeg - Monday, 7:30pm
> Toronto - Monday, 8:30pm
> GMT/UTC - Tuesday, 1:30
> Paris - Tuesday, 2:30am
> Taipei - Tuesday, 8:30am
> Brisbane - Tuesday, 10:30am
> Auckland - Tuesday, 12:30pm
> 
> http://arewemeetingyet.com/Toronto/2014-04-21/20:30/Rendering%20Meeting
> 
> Video conferencing:
> Vidyo room Graphics (9366)
> https://v.mozilla.com/flex.html?roomdirect.html&key=FILzKzPcA6W2
> 
> Phone conferencing:
> +1 650 903 0800 x92 Conf# 99366
> +1 416 848 3114 x92 Conf# 99366
> +1 800 707 2533 (pin 369) Conf# 99366
> 
> --
> - Milan
> 


Re: Using protobuf in m-c

2014-09-29 Thread Jeff Muizelaar

On Sep 24, 2014, at 1:38 PM, Fitzgerald, Nick  wrote:

> Hey folks,
> 
> We already have the protobuf library in the tree, and it seems to be used for 
> layer scope and webrtc.
> 
> I'd like to use it for serializing heap snapshots in devtools code, but I 
> have a couple questions:

I’m not sure of the exact use case but wouldn’t something like MessagePack or 
BSON be more appropriate for serializing arbitrary data from a JS heap?
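To make the contrast concrete: protobuf needs the message layout declared in a schema up front, while MessagePack/BSON-style formats are self-describing, which suits heap nodes whose fields vary per node. A sketch using the stdlib json module as a stand-in for such a self-describing format (MessagePack is roughly a binary analogue of this; the snapshot structure below is invented for illustration):

```python
import json

# A heap-snapshot node can carry arbitrary nested, per-node fields --
# awkward to pin down in a fixed schema, trivial for a self-describing
# format. This structure is invented for illustration.
node = {
    "addr": "0x7f3c0",
    "kind": "Object",
    "edges": [{"name": "proto", "to": "0x7f400"}],
    "extra": {"shape": 12, "slots": [1, 2, 3]},
}

blob = json.dumps(node)
assert json.loads(blob) == node  # no schema needed to read it back
```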

-Jeff


Compiler version expectations

2014-10-16 Thread Jeff Muizelaar
After some discussion on IRC, it became apparent that our compiler deprecation 
schedule is not well defined.

Now that we’re using VS2013 on trunk and will soon no longer be using GCC 4.4 
for B2G, I expect we’ll be dropping support for building with VS2010 and GCC 
4.4 in the near term.

This is important to us because Skia is planning to use more C++11 features in 
the near term and we’d like to continue updating from upstream. Are there 
reasons we can’t drop support for these compilers in the 37-38 time frame?

-Jeff


Re: Compiler version expectations

2014-10-16 Thread Jeff Muizelaar

On Oct 16, 2014, at 3:57 PM, Ehsan Akhgari  wrote:

> On 2014-10-16, 3:49 PM, Jeff Muizelaar wrote:
>> After some discussion on IRC, it became apparent that our compiler deprecation 
>> schedule is not very well defined.
>> 
>> Now that we’re using VS2013 on trunk and will soon no longer be using GCC 4.4 
>> for B2G, I expect we’ll be dropping support for building with VS2010 and GCC 
>> 4.4 in the near term.
> 
> GCC is https://bugzilla.mozilla.org/show_bug.cgi?id=1077549.  No specific bug 
> or plans for MSVC2010, but I'd be open to killing support for it on the next 
> release train.
> 
>> This is important to us because Skia is planning to use more C++11 features 
>> in the near term and we’d like to continue updating from upstream. Are there 
>> reasons we can’t drop support for these compilers in the 37-38 time frame?
> 
> What C++11 features specifically?

This set: http://chromium-cpp.appspot.com/

-Jeff



Re: Compiler version expectations

2014-10-16 Thread Jeff Muizelaar
Type aliasing (C++11 alias templates) requires VS2013, but we can probably keep 
them from using that for now. I don’t think asking them to support VS2012 will 
be too much of a burden.
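For reference, a minimal sketch of the feature in question. This is an illustrative example, not Skia code; alias templates are the C++11 syntax that VS2012 rejects but VS2013 and GCC 4.7+ accept:

```cpp
#include <cassert>
#include <vector>

// C++11 alias template ("type aliasing"): a templated typedef.
// VS2012 rejects this syntax; VS2013 accepts it.
template <typename T>
using Vec = std::vector<T>;

// The pre-C++11 workaround that keeps VS2012 working: wrap the
// typedef in a struct and spell out ::Type at each use site.
template <typename T>
struct VecOld {
  typedef std::vector<T> Type;
};
```

The workaround compiles everywhere but is noisier at use sites, which is why projects tend to adopt the alias form as soon as their minimum compiler allows it.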

-Jeff

On Oct 16, 2014, at 4:29 PM, David Major  wrote:

> I was thinking it would be nice to support VS2010 as long as any of our main 
> channels use it -- meaning we could drop it on the first day of 39. But I 
> have no practical justification for that. If it causes a burden on Skia work 
> then it might be reasonable to switch sooner.
> 
>> This set: http://chromium-cpp.appspot.com/
> What MS compiler does that list require? There's a number of people building 
> with VS2012, would that still be supported?
> 
> David
> 
> - Original Message -
>> From: "Jeff Muizelaar" 
>> To: "Ehsan Akhgari" 
>> Cc: "dev-platform@lists.mozilla.org list" 
>> Sent: Friday, October 17, 2014 9:14:19 AM
>> Subject: Re: Compiler version expectations
>> 
>> 
>> On Oct 16, 2014, at 3:57 PM, Ehsan Akhgari  wrote:
>> 
>>> On 2014-10-16, 3:49 PM, Jeff Muizelaar wrote:
>>>> After some discussion on IRC, it became apparent that our compiler deprecation
>>>> schedule is not very well defined.
>>>> 
>>>> Now that we’re using VS2013 on trunk and will soon no longer be using GCC 4.4
>>>> for B2G, I expect we’ll be dropping support for building with VS2010 and
>>>> GCC 4.4 in the near term.
>>> 
>>> GCC is https://bugzilla.mozilla.org/show_bug.cgi?id=1077549.  No specific
>>> bug or plans for MSVC2010, but I'd be open to killing support for it on
>>> the next release train.
>>> 
>>>> This is important to us because Skia is planning to use more C++11
>>>> features in the near term and we’d like to continue updating from
>>>> upstream. Are there reasons we can’t drop support for these compilers in
>>>> the 37-38 time frame?
>>> 
>>> What C++11 features specifically?
>> 
>> This set: http://chromium-cpp.appspot.com/
>> 
>> -Jeff
>> 


Re: Compiler version expectations

2014-10-20 Thread Jeff Muizelaar
I think that’s manageable.

-Jeff

On Oct 20, 2014, at 2:54 PM, Ehsan Akhgari  wrote:

> So I just spoke with Callek about his plans to move SeaMonkey off of their 
> existing Windows 2003 builders that cannot install Visual Studio 2012 or 
> newer, and it seems like Dec 15th is a date that will probably work fine for 
> SM, and that is still within the Gecko 37 cycle.  Jeff, can we hold off the 
> Skia update plans until that date?
> 
> Thanks!
> 
> On Thu, Oct 16, 2014 at 5:35 PM, Ehsan Akhgari  
> wrote:
> Can you please ask them to not use variadic templates too?  That also seems 
> to require MSVC 2013.
> 
> 
> On 2014-10-16, 4:33 PM, Jeff Muizelaar wrote:
> Type aliasing (C++11 alias templates) requires VS2013, but we can probably 
> keep them from using that for now. I don’t think asking them to support 
> VS2012 will be too much of a 
> burden.
> 
> -Jeff
> 
> On Oct 16, 2014, at 4:29 PM, David Major  wrote:
> 
> I was thinking it would be nice to support VS2010 as long as any of our main 
> channels use it -- meaning we could drop it on the first day of 39. But I 
> have no practical justification for that. If it causes a burden on Skia work 
> then it might be reasonable to switch sooner.
> 
> This set: http://chromium-cpp.appspot.com/
> What MS compiler does that list require? There's a number of people building 
> with VS2012, would that still be supported?
> 
> David
> 
> - Original Message -
> From: "Jeff Muizelaar" 
> To: "Ehsan Akhgari" 
> Cc: "dev-platform@lists.mozilla.org list" 
> Sent: Friday, October 17, 2014 9:14:19 AM
> Subject: Re: Compiler version expectations
> 
> 
> On Oct 16, 2014, at 3:57 PM, Ehsan Akhgari  wrote:
> 
> On 2014-10-16, 3:49 PM, Jeff Muizelaar wrote:
> After some discussion on IRC, it became apparent that our compiler deprecation
> schedule is not very well defined.
> 
> Now that we’re using VS2013 on trunk and will soon no longer be using GCC 4.4
> for B2G, I expect we’ll be dropping support for building with VS2010 and
> GCC 4.4 in the near term.
> 
> GCC is https://bugzilla.mozilla.org/show_bug.cgi?id=1077549.  No specific
> bug or plans for MSVC2010, but I'd be open to killing support for it on
> the next release train.
> 
> This is important to us because Skia is planning to use more C++11
> features in the near term and we’d like to continue updating from
> upstream. Are there reasons we can’t drop support for these compilers in
> the 37-38 time frame?
> 
> What C++11 features specifically?
> 
> This set: http://chromium-cpp.appspot.com/
> 
> -Jeff
> 
> 
> 
> 
> 
> 
> 
> -- 
> Ehsan



Re: Serious performance regression when setting the background of window to be transparent under win32.

2014-12-17 Thread Jeff Muizelaar
Can you get profiles of the two cases?

https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler

It's also probably worth filing a bug about this.

-Jeff

On Tue, Dec 16, 2014 at 4:08 PM, 罗勇刚(Yonggang Luo) 
wrote:
>
> The result for Win7 X64, when the background of the window is transparent.
> CanvasMark Score: 4538 (Mozilla 31 on Windows)
>
> Without transparent background:
>
> CanvasMark Score: 5366 (Mozilla 31 on Windows)
>
> Tweet this result.
>
> 2014-12-16 23:30 GMT+08:00 Jeff Muizelaar :
> > Or rather than what platform what does the graphic section of
> about:support
> > say?
> >
> > On Tue, Dec 16, 2014 at 10:28 AM, Jeff Muizelaar  >
> > wrote:
> >>
> >> How much slower is it? And on what platform do you see this?
> >>
> >> -Jeff
> >>
> >> On Tue, Dec 16, 2014 at 8:15 AM, Yonggang Luo 
> >> wrote:
> >>>
> >>> When I setting the background of the window to be transparent,
> >>> And creating a tree with about 10 rows, and when
> >>> I scrolling the tree by dragging the thumb button, the
> >>> dragging speed is much slower.
>
>
>
> --
>  此致
> 礼
> 罗勇刚
> Yours
> sincerely,
> Yonggang Luo
>


Re: Getting rid of already_AddRefed?

2014-12-22 Thread Jeff Muizelaar
We were talking about this problem and it was a bunch of work to figure out the 
conclusion so I decided to write a summary:

Replacing already_AddRefed with nsRefPtr allows two new things:

nsRefPtr<T> getT();

1. T* p = getT(); // this is unsafe because the destructor runs immediately and 
p is left dangling
2. foo(getT()); // this is safe

Possible solutions would be to:
 - remove implicit conversions to T*
 - make operator T* explicit and delete the rvalue-qualified overload
   (operator T*() && = delete) // ref-qualifiers will be available in
   GCC 4.8.1 and the MSVC 2014 Nov CTP
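A minimal sketch of the second option. RefPtr here is a hypothetical stand-in for nsRefPtr (delete/new stand in for Release()/AddRef()); the names and details are illustrative, not the actual Gecko classes:

```cpp
#include <cassert>

struct T { int value = 42; };

// Hypothetical refcounting smart pointer, reduced to the parts that
// matter for the conversion-operator discussion above.
template <typename U>
class RefPtr {
 public:
  explicit RefPtr(U* aPtr) : mPtr(aPtr) {}
  RefPtr(RefPtr&& aOther) : mPtr(aOther.mPtr) { aOther.mPtr = nullptr; }
  RefPtr(const RefPtr&) = delete;
  ~RefPtr() { delete mPtr; }  // stands in for Release()

  // Converting from an lvalue is fine: the RefPtr outlives the raw pointer.
  operator U*() const& { return mPtr; }
  // Deleting the rvalue overload turns case 1 above (T* p = getT();) into a
  // compile error instead of a dangling pointer. Note it also rejects the
  // safe foo(getT()) call, which is the tradeoff under discussion.
  operator U*() && = delete;

 private:
  U* mPtr;
};

RefPtr<T> getT() { return RefPtr<T>(new T()); }
```

With this in place, `T* p = getT();` no longer compiles, while conversion from a named RefPtr still works implicitly.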

-Jeff
 
On Aug 14, 2014, at 10:21 AM, Ehsan Akhgari  wrote:

> On Thu, Aug 14, 2014 at 10:02 AM, Aryeh Gregor  wrote:
> 
>> On Thu, Aug 14, 2014 at 12:00 PM, Neil  wrote:
>>> Well there's your problem: GetWSBoundingParent doesn't need to own the
>> nodes
>>> it works on.
>> 
>> Editing code is ancient, poorly maintained, and not
>> performance-sensitive, so personally, I don't ever use raw pointers as
>> local variables in editor/.  Better to not have to rely on manual
>> review to prevent use-after-free bugs.  I am aware of at least one
>> sec-critical bug in editor that resulted from non-use of nsCOMPtr that
>> slipped through review when someone was rewriting editing code.
>> 
>> In this case, I seem to remember I wanted to change it to return a raw
>> pointer instead of already_AddRefed, but IIRC, Ehsan said not to
>> (although I can't find that anywhere, so maybe I made it up).
>> 
> 
> I don't remember either!
> 
> -- 
> Ehsan


Re: Enhancing Gecko as a WebGL game platform

2015-01-13 Thread Jeff Muizelaar
On Tue, Jan 13, 2015 at 10:56 AM, Mike de Boer  wrote:

>
> 2. Optionally bypass the browser compositor when a WebGL context is in
> fullscreen mode. In this mode, WebGL draw calls would write to the  OS back
> buffer directly, increasing performance. Of course, this would never be
> possible if the WebGL context has to be rendered amidst other HTML elements
> on a web page, so that’s why the proposition here is for fullscreen mode
> only.
>

There was a thread on this on the public webgl mailing list recently:
https://www.khronos.org/webgl/public-mailing-list/archives/1412/msg00062.html

Interestingly enough, I believe Safari and IE already avoid the problem
this would solve because they use the system compositor to composite their
layers instead of a built-in one.

-Jeff


Re: Enhancing Gecko as a WebGL game platform

2015-01-14 Thread Jeff Muizelaar
On Wed, Jan 14, 2015 at 4:29 AM, Mike de Boer  wrote:

>
> On 13 Jan 2015, at 21:52, Jeff Muizelaar  wrote:
>
>
>
> On Tue, Jan 13, 2015 at 10:56 AM, Mike de Boer 
> wrote:
>
>>
>> 2. Optionally bypass the browser compositor when a WebGL context is in
>> fullscreen mode. In this mode, WebGL draw calls would write to the  OS back
>> buffer directly, increasing performance. Of course, this would never be
>> possible if the WebGL context has to be rendered amidst other HTML elements
>> on a web page, so that’s why the proposition here is for fullscreen mode
>> only.
>>
>
> There was a thread on this on the public webgl mailing list recently:
>
> https://www.khronos.org/webgl/public-mailing-list/archives/1412/msg00062.html
>
>
> This is exactly what I’m talking about, Robert, and it’d be used primarily
> to reduce latency and improve frame rates for fullscreen games.
>
>
> Interestingly enough, I believe Safari and IE already avoid the problem
> this would solve because they use the system compositor to composite their
> layers instead of a built-in one.
>
>
> So how do they deal with compositing adjacent HTML into a single texture?
> Regardless, I'd believe it right away that these browser targeted to a
> single platform can optimise their rendering pipeline.
> I did hear that Blink (perhaps Webkit too?) _does_ have this problem as
> well.
>

That content is just in a different layer and is composited on top by the
system compositor.

Safari uses CoreAnimation as compositor. CoreAnimation supports doing the
composition in the WindowServer process directly instead of into an
application backbuffer. DirectComposition gives the same benefit on Windows
and we actually have a similar thing on FirefoxOS.

Firefox and Chrome both composite everything into an application backbuffer
that is composited by the window manager and thus have an additional copy.

-Jeff


Re: Intent to ship: MouseEvent.offsetX/Y

2015-02-27 Thread Jeff Muizelaar
On Fri, Feb 27, 2015 at 2:21 PM, Robert O'Callahan  wrote:
> Oh, another issue is that I've followed the spec and made offsetX/Y
> doubles, whereas Blink is integers, which introduces a small amount compat
> risk.
>

IE also uses integers. Wouldn't it be better to change the spec to
follow the existing browser's behaviour?

-Jeff


dev-platform@lists.mozilla.org

2015-03-02 Thread Jeff Muizelaar
It looks like the current one should already be fine, as the AssignASCII
will be inlined into the caller and then the strlen can be inlined as
well.
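For context, one way the overload discussed in the quoted thread could capture a literal's length at compile time. This is a hypothetical sketch, not Gecko's actual string API; the class and method names are illustrative:

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of a string class that can take a literal's length from the
// array type instead of calling strlen at runtime.
class String {
 public:
  void AssignASCII(const char* aData, size_t aLength) {
    mData.assign(aData, aLength);
  }

  // Runtime-string overload: the length must be computed with strlen
  // (which the compiler can often constant-fold for literals anyway,
  // as noted above).
  void AssignASCII(const char* aData) { AssignASCII(aData, strlen(aData)); }

  // Literal overload: N includes the trailing '\0', so the length is
  // N - 1 and is known at compile time. A distinct name sidesteps the
  // overload-resolution ambiguity between an array reference and the
  // const char* overload.
  template <size_t N>
  void AssignLiteralASCII(const char (&aLiteral)[N]) {
    AssignASCII(aLiteral, N - 1);
  }

  const std::string& Get() const { return mData; }

 private:
  std::string mData;
};
```

Whether the template overload is worth it depends on whether the plain strlen call actually survives optimization, which is the point being made in this reply.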

-Jeff

On Sun, Mar 1, 2015 at 7:04 PM, smaug  wrote:
> On 03/02/2015 01:11 AM, Xidorn Quan wrote:
>>
>> On Mon, Mar 2, 2015 at 9:50 AM, Boris Zbarsky  wrote:
>>
>>> On 3/1/15 5:04 PM, Xidorn Quan wrote:
>>>
 Hence I think we should remove this method. All callees should use
 either
 AssignLiteral(MOZ_UTF16("some string")), or, if don't want to bloat the
 binary, explicitly use AssignASCII("some string").

>>>
>>> The latter requires an strlen() that AssignLiteral optimizes out, right?
>>>
>>
>> Yes, so we can add another overload to AssignASCII which does this
>> optimization
>
> How would you do that?
>
>> with less misleading.
>>
>> - Xidorn
>>
>


Re: Is MOZ_SHARK still used?

2015-04-02 Thread Jeff Muizelaar
I don't think Shark runs on any modern macs.

-Jeff

On Thu, Apr 2, 2015 at 4:22 PM, Robert Strong  wrote:
> I filed Bug 1150312 to remove it if it is no longer used so please speak up
> if it is.
> https://bugzilla.mozilla.org/show_bug.cgi?id=1150312
>
> Thanks,
> Robert


Re: fx-team repository stripped

2015-04-22 Thread Jeff Muizelaar
Should we have a hook to catch this kind of thing?

-Jeff

On Wed, Apr 22, 2015 at 1:44 PM, Gregory Szorc  wrote:
> 2 files summing to 90 MB of binary data (a Firefox installer) were checked
> into fx-team a few hours ago.
>
> While Mercurial (and Git) can handle binary files of this size,
> transferring excessively large files adds overhead to systems and is a
> barrier to contributors on slow connections. We therefore try to limit the
> number of large files checked in to the tree.
>
> Because the commit was noticed quickly and because it hadn't merged to
> other trees yet, I made the rapid decision (with peer review of course) to
> strip this commit from fx-team and to rebase subsequent commits.
>
> If you pulled fx-team in the past few hours, you should run `hg --config
> extensions.strip= strip -r a3924a37fa0e::` and re-pull so your repo is
> consistent with the server.
>
> Bug 1157353 tracks any remaining cleanup.


Re: DXR 2.0 staged. Feedback please!

2015-06-04 Thread Jeff Muizelaar
It looks like finding overrides of virtual methods is missing from
DXR 2.0. Is this intentional?

-Jeff

On Wed, Jun 3, 2015 at 3:10 PM, Erik Rose  wrote:
> DXR 2.0 is about to land! This is a major revision touching every part of the 
> system, swapping out SQLite for elasticsearch, and replacing many hard-coded 
> C++ assumptions with a language-independent plugin interface.
>
> Please take it for a spin on the staging server at http://dxr.allizom.org/, 
> and see if you find any regressions from the production version at 
> dxr.mozilla.org. You can file them directly at 
> https://bugzilla.mozilla.org/enter_bug.cgi?product=Webtools&component=DXR&status_whiteboard=es
>  or just reply. Barring showstoppers, we plan to put it into prod within a 
> few weeks.
>
> What's new?
>
> * Improved C/C++ analysis
> * Multi-language support—Python and Rust, for starters, soon to be enabled 
> for moz-central
> * All queries are fast—and will be even faster in prod, once our webheads and 
> elasticsearch servers are colocated
> * Browsing of images
> * Listing of binary files
> * Result counts (so jrudermann can have googlefights)
> * Independent tree indexing, so one build failure won't scuttle updates for 
> the rest of the trees. (This will help us get all the trees currently under 
> MXR indexed.)
> * Parallel indexing so we can set the DC on fire
> * New plugin architecture so we can add new languages, query types, and cross 
> references easily 
> (https://dxr.readthedocs.org/en/es/development.html#writing-plugins)
>
> This is really a backend-focused release, but you can see some of the new 
> possibilities start to leak out. I'm enthusiastic about the features this 
> will enable next: better surfacing of symbols without having to know their 
> type ahead of time, faceted drill-down, context for search results, and 
> permalinks (our last major blocker to decommissioning MXR).
>
> Thanks for helping test it out!
>
> Erik Rose
> DXR Lead


Re: Intent to implement and ship: Unprivileged WEBGL_debug_renderer_info

2015-06-15 Thread Jeff Muizelaar
I'm concerned this will discourage websites from reporting WebGL
issues because it will be easier just to block whatever device has the
problem they're running into. This creates an additional burden on
the web developer and essentially creates the user agent problem all
over again, but at a much worse scale because of the wide range of
possible devices. This may be manageable for very large developers
like Google but I don't think it scales across web developers. We are
typically in a better position to control and update any WebGL
blacklist.

I've suggested that creating an easy way to relay diagnostic
information to a website in the event of a problem is a better
solution for improving the overall quality of our WebGL implementation
and sharing that benefit with all websites instead of just benefiting
large properties like Google's.

-Jeff

On Mon, Jun 15, 2015 at 7:16 PM, Jeff Gilbert  wrote:
> Summary:
> The WEBGL_debug_renderer_info extension allows for querying which driver
> (and commonly GPU) a WebGL context is running on. Specifically, it allows
> querying the RENDERER and VENDOR strings of the underlying OpenGL driver.
>
> By default, RENDERER and VENDOR queries in WebGL yield safe but useless
> values. (For example, Gecko returns "Mozilla"/"Mozilla" for
> RENDERER/VENDOR) Queries to UNMASKED_RENDERER_WEBGL and
> UNMASKED_VENDOR_WEBGL yield the RENDERER and VENDOR string of the
> underlying graphics driver. These values are combined to form the "WebGL
> Renderer" field in about:support. On my system, these are:
> * UNMASKED_RENDERER_WEBGL: "ANGLE (NVIDIA GeForce GT 750M Direct3D11 vs_5_0
> ps_5_0)"
> * UNMASKED_VENDOR_WEBGL: "Google Inc." [1]
>
> Bug:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1171228
>
> Link To Standard:
> https://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/
>
> Do other browser engines implement this:
> Chrome and IE implement this; Safari does not.
>
> Platform Coverage: All platforms.
>
> Current Target Release: Firefox 41
>
> Related Preferences:
> * "webgl.disable-debug-renderer-info" (default: false): Disable this
> extension for unprivileged content.
> * "webgl.renderer-string-override" (default: ""): Overrides
> UNMASKED_RENDERER_WEBGL query result when non-empty.
> * "webgl.vendor-string-override" (default: ""): Overrides
> UNMASKED_VENDOR_WEBGL query result when non-empty.
>
> Security and Privacy Concerns:
> * Traditional user-agent sniffing concerns. (Known antipattern)
> * This info includes what GPU is being used, which may contribute to
> marketing profiles.
> * This info adds easily-accessible bits of entropy that improve
> fingerprinting, reducing privacy. (Panopticlick and others have
> demonstrated that this is already very effective)
>
> Web Developer Use-Cases:
> * Sites can more easily and immediately identify and address concerns
> caused by specific hardware or drivers. Currently, apps must
> unconditionally work around an issue until we can ship a fix via browser
> updates. This can mean performance degradation for unaffected machines,
> sometimes for weeks.
> * Sites can collate and cross-reference drivers and hardware when tracking
> issues both user-reported and auto-detected, which both helps sites
> identify problematic hardware, and helps browsers fix these issues in turn.
> * This allows sites to offer better estimates of performance, and offer
> reasonable defaults for quality settings.
>
> [1] On Windows, we use ANGLE as an intermediary driver on top of D3D, hence
> the VENDOR string being "Google, Inc." and not "NVIDIA" here.


Re: Intent to implement and ship: Unprivileged WEBGL_debug_renderer_info

2015-06-16 Thread Jeff Muizelaar
A concrete example of this kind of thing occurred a little while ago
with Google Maps. They reported that users on G41 class hardware were
getting distortion when zoomed out in earth mode. This was because of
our switch to D3D11 ANGLE. When we got this report we were able to
reproduce the problem and block D3D11 ANGLE on that hardware. Google
Maps blacklists this hardware when it can detect it using
WEBGL_debug_renderer_info and if we had been exposing
WEBGL_debug_renderer_info we would not have found this problem as
quickly as we did.

We would've been able to find this problem even faster if we had a
better way for users to get this information to websites. The current
process with Google Maps seems to require users to complain on their
forum, for Google to ask them for the output of dxdiag and for them to
manually see a pattern in the output. This is obviously a process with
very low success and it seems like we could do something to make it
better.

-Jeff

On Mon, Jun 15, 2015 at 9:18 PM, Jeff Muizelaar  wrote:
> I'm concerned this will discourage websites from reporting WebGL
> issues because it will be easier just to block whatever device has the
> problem they're running into. This creates an additional burden on
> the web developer and essentially creates the user agent problem all
> over again, but at a much worse scale because of the wide range of
> possible devices. This may be manageable for very large developers
> like Google but I don't think it scales across web developers. We are
> typically in a better position to control and update any WebGL
> blacklist.
>
> I've suggested that creating an easy way to relay diagnostic
> information to a website in the event of a problem is a better
> solution for improving the overall quality of our WebGL implementation
> and sharing that benefit with all websites instead of just benefiting
> large properties like Google's.
>
> -Jeff
>
> On Mon, Jun 15, 2015 at 7:16 PM, Jeff Gilbert  wrote:
>> Summary:
>> The WEBGL_debug_renderer_info extension allows for querying which driver
>> (and commonly GPU) a WebGL context is running on. Specifically, it allows
>> querying the RENDERER and VENDOR strings of the underlying OpenGL driver.
>>
>> By default, RENDERER and VENDOR queries in WebGL yield safe but useless
>> values. (For example, Gecko returns "Mozilla"/"Mozilla" for
>> RENDERER/VENDOR) Queries to UNMASKED_RENDERER_WEBGL and
>> UNMASKED_VENDOR_WEBGL yield the RENDERER and VENDOR string of the
>> underlying graphics driver. These values are combined to form the "WebGL
>> Renderer" field in about:support. On my system, these are:
>> * UNMASKED_RENDERER_WEBGL: "ANGLE (NVIDIA GeForce GT 750M Direct3D11 vs_5_0
>> ps_5_0)"
>> * UNMASKED_VENDOR_WEBGL: "Google Inc." [1]
>>
>> Bug:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1171228
>>
>> Link To Standard:
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/
>>
>> Do other browser engines implement this:
>> Chrome and IE implement this; Safari does not.
>>
>> Platform Coverage: All platforms.
>>
>> Current Target Release: Firefox 41
>>
>> Related Preferences:
>> * "webgl.disable-debug-renderer-info" (default: false): Disable this
>> extension for unprivileged content.
>> * "webgl.renderer-string-override" (default: ""): Overrides
>> UNMASKED_RENDERER_WEBGL query result when non-empty.
>> * "webgl.vendor-string-override" (default: ""): Overrides
>> UNMASKED_VENDOR_WEBGL query result when non-empty.
>>
>> Security and Privacy Concerns:
>> * Traditional user-agent sniffing concerns. (Known antipattern)
>> * This info includes what GPU is being used, which may contribute to
>> marketing profiles.
>> * This info adds easily-accessible bits of entropy that improve
>> fingerprinting, reducing privacy. (Panopticlick and others have
>> demonstrated that this is already very effective)
>>
>> Web Developer Use-Cases:
>> * Sites can more easily and immediately identify and address concerns
>> caused by specific hardware or drivers. Currently, apps must
>> unconditionally work around an issue until we can ship a fix via browser
>> updates. This can mean performance degradation for unaffected machines,
>> sometimes for weeks.
>> * Sites can collate and cross-reference drivers and hardware when tracking
>> issues both user-reported and auto-detected, which both helps sites
>> identify problematic hardware, and helps browsers fix these issues in turn.
>> * This allows sites to offer better estimates of performance, and offer
>> reasonable defaults for quality settings.

GTK3 linux builds

2015-06-16 Thread Jeff Muizelaar
We're working on making all of the tests green for GTK3. This means
that we could be changing the default linux configuration to GTK3 as
early as FF42. If anyone has any reasons for us not to make this
change it would be good to know now. FWIW, I believe Fedora is already
shipping GTK3 builds of Firefox.

-Jeff


Re: GTK3 linux builds

2015-06-16 Thread Jeff Muizelaar
Is there any reason not to support all the way back to the version of
GTK (3.4) on the test machines?

-Jeff

On Tue, Jun 16, 2015 at 5:11 PM, Mike Hommey  wrote:
> On Tue, Jun 16, 2015 at 04:16:13PM -0400, Jeff Muizelaar wrote:
>> We're working on making all of the tests green for GTK3. This means
>> that we could be changing the default linux configuration to GTK3 as
>> early as FF42. If anyone has any reasons for us not to make this
>> change it would be good to know now. FWIW, I believe Fedora is already
>> shipping GTK3 builds of Firefox.
>
> I depends on what our target GTK3 version would be. If, as recently
> suggested, we go with 3.14 as the minimum supported, that's fairly
> new (9 months old), and switching our builds to GTK3 would make us
> drop support for a lot of people that use older systems.
>
> I thought we'd be shipping both GTK2 and GTK3 builds for a while.
>
> Mike


Re: GTK3 linux builds

2015-06-17 Thread Jeff Muizelaar
On Wed, Jun 17, 2015 at 11:22 AM, Benjamin Smedberg
 wrote:
> On 6/16/15 4:16 PM, Jeff Muizelaar wrote:
>>
>> We're working on making all of the tests green for GTK3. This means
>> that we could be changing the default linux configuration to GTK3 as
>> early as FF42.
>
>
> What are the advantages of the GTK3 build?

Modern features on Linux including HiDPI, smooth scrolling, touch
events and better theming. The deep X dependency is also the cause of
a lot of bad graphics performance and causes the Linux builds to lag in
graphics features. This is a pain because Linux is currently the best
debugging environment because of rr.

> Is there a list of which
> distros/versions would continue to work and which would stop working?

No, but that's probably worth getting. We're currently testing against
GTK 3.4, which was released in February 2012.

> Do we
> have a plan not to update existing users who would be broken by the new
> builds? I seriously doubt we should spend a lot of time

No. I'm not sure this is worth doing. As I understand it, most people
on Linux use distro builds.

> Are there issues with plugin compatibility in GTK3 builds?

No, we have a wrapper that allows plugins to continue using GTK2

> I seriously doubt we want to spend release resources on shipping and doing
> release checks on multiple Linux builds (I think we've even discussed
> dropping Linux x86 support).

That's what I've been hearing and why I'm suggesting that we might be
better off switching our releases to GTK3.

> In case it's not clear, I'm skeptical that we should do this.

Do what? GTK3? Staying with GTK2 is actively causing us pain and I'm
not convinced it's worth avoiding GTK3 to support old Linux distros.

-Jeff


Re: mozilla::TemporaryRef is gone; please use already_AddRefed

2015-06-30 Thread Jeff Muizelaar
I believe this is predicated on removing the implicit conversion from
nsRefPtr<T> to T*.

-Jeff

On Tue, Jun 30, 2015 at 5:28 PM, Robert O'Callahan  wrote:
> Will it ever be possible to eliminate TemporaryRef and use moves instead?
>
> Rob
> --
> oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
> owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
> osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
> owohooo
> osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o
> oioso
> oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
> owohooo
> osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
> ooofo
> otohoeo ofoioroeo ooofo ohoeololo.


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread Jeff Muizelaar
FWIW, I did a quick poll of the people in our Gfx daily. Here are the results:

For aArguments:
 Bas
 Milan
 Matt
 Kats

Against aArguments:
 Me

No strong opinion:
 Sotaro
 Lee
 Benoit
 Nical
 Mason

-Jeff

On Tue, Jul 7, 2015 at 11:12 AM, Nick Fitzgerald
 wrote:
> (Posted this reply to the wrong thread, reposting to the right one... >_<)
>
> One more group of defectors within Mozilla. From the DevTools coding
> standards[0]:
>
> """
>
>- aArguments aAre the aDevil (don't use them please)
>
> """
>
> Although, there are still some files in tree with the legacy style.
>
> [0] https://wiki.mozilla.org/DevTools/CodingStandards#Code_style
>
> On Tue, Jul 7, 2015 at 6:57 AM, Kartikaya Gupta  wrote:
>
>> On Tue, Jul 7, 2015 at 9:18 AM, Honza Bambas  wrote:
>> >> I'd be happy to support
>> >> removing the prefix if people also commit to splitting any giant
>> >> functions they touch as part of the prefix removal.
>> >
>> >
>> > That's (sorry) nonsense.  In almost all cases longer methods/functions
>> > cannot be just split.  It would probably make the code just much harder
>> to
>> > read and maintain (with a lot of new arguments missing the 'a' prefix ;))
>> > and is not necessary.  Not an argument IMHO.
>>
>> Can you point me to a couple of examples of long functions that you
>> think cannot be split reasonably? I'm curious to see what it looks
>> like. Obviously functions with giant switch statements and the like
>> are exceptions and should be treated exceptionally but I would like to
>> see some "regular" functions that can't be split.
>>
>> Cheers,
>> kats


Re: Hash table iterators, and a call for help

2015-07-13 Thread Jeff Muizelaar
I did not see nsTHashtable and nsBaseHashtable define STL-style
iterators for use in range-based for loops. Is this intentional?

-Jeff

On Mon, Jul 13, 2015 at 1:36 AM, Nicholas Nethercote
 wrote:
> Hi,
>
> Last week I landed patches that remove PL_DHashTableEnumerate() from
> the tree (https://bugzilla.mozilla.org/show_bug.cgi?id=1180072). You
> should now use PLDHashTable::Iterator if you want to iterate over a
> PLDHashTable. The iterator is *so* much nicer -- there's none of that
> "bundle up an environment as a |void*| pointer and pass it in with a
> function pointer" annoyance.
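
[Editor's sketch of the two styles, using std::unordered_map as a stand-in for PLDHashTable; the names and callback signature are hypothetical, not the real API:]

```cpp
#include <string>
#include <unordered_map>

// Simplified stand-in for a hash table; not the real PLDHashTable.
using Table = std::unordered_map<std::string, int>;
using EnumCallback = void (*)(const std::string& aKey, int aValue, void* aEnv);

// Old style: walk the table through a C callback, threading caller state
// through an opaque void* "environment".
inline void EnumerateEntries(const Table& aTable, EnumCallback aCb,
                             void* aEnv) {
  for (const auto& entry : aTable) {
    aCb(entry.first, entry.second, aEnv);
  }
}

inline int SumViaCallback(const Table& aTable) {
  int sum = 0;
  EnumerateEntries(
      aTable,
      [](const std::string&, int aValue, void* aEnv) {
        *static_cast<int*>(aEnv) += aValue;  // unpack the environment
      },
      &sum);
  return sum;
}

// New style: an ordinary loop over an iterator, no void* plumbing.
inline int SumViaIterator(const Table& aTable) {
  int sum = 0;
  for (const auto& entry : aTable) {
    sum += entry.second;
  }
  return sum;
}
```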
>
> I have also added Iterator classes to each of nsTHashtable and
> nsBaseHashtable (https://bugzilla.mozilla.org/show_bug.cgi?id=1181445)
> and I would like to also eventually remove the enumerate functions
> from these classes. However, there are 500+ calls to those enumerate
> functions so it's going to take a while.
>
> For now, I've filed bugs to get rid of all the
> nsTHashtable::EnumerateEntries() calls, which account for ~160 of
> those 500+. They're all blocking
> https://bugzilla.mozilla.org/show_bug.cgi?id=1181443. If you find
> yourself in the mood for some not-too-taxing refactoring, please feel
> free to take on one or more of the unassigned bugs. The number of
> calls to replace in each bug ranges from one or two up to 21. If you
> have any questions please ask. Thank you.
>
> Nick


Re: GTK3 linux builds

2015-07-20 Thread Jeff Muizelaar
I believe Flash does.

-Jeff

On Sun, Jul 19, 2015 at 11:34 PM, Robert O'Callahan
 wrote:
> Jeff, does Flash work with GTK3 builds?
>
> Rob


Re: GTK3 linux builds

2015-07-20 Thread Jeff Muizelaar
Benjamin,

Do you still have any opposition to the plan suggested by Roc?

-Jeff

On Mon, Jul 20, 2015 at 9:30 AM, Robert O'Callahan  wrote:
> On Tue, Jul 21, 2015 at 1:04 AM, Jeff Muizelaar 
> wrote:
>>
>> I believe Flash does.
>
>
> OK, I can't get it to work, but I think it's just my system.
>
> I verified that Fedora 22 (and I think 21) shipped GTK3 Firefox. If there
> aren't any major blocking bugs for GTK3 >= 3.4, and the tests are green, I
> think we should switch to GTK3 immediately and stop doing GTK2 builds. I
> know the bugs mentioned in the thread above are fixed for GTK >= 3.4.
>
> Rob


Re: GTK3 linux builds

2015-07-20 Thread Jeff Muizelaar
On Mon, Jul 20, 2015 at 6:18 PM, Mike Hommey  wrote:
>
> There are a few remaining perma reds and oranges, FWIW.

Which ones? I don't see anything on elm.

-Jeff

