Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Michel Dänzer
On Mon, 2010-04-12 at 10:12 -0700, Jesse Barnes wrote: 
> On Mon, 12 Apr 2010 09:00:57 +0200
> Michel Dänzer  wrote:
> 
> > On Mon, 2010-04-12 at 08:00 +0200, Luca Barbieri wrote: 
> > > The Intel drivers also appear to be in the same situation, with
> > > classic drivers not being dropped in favor of Gallium ones, also
> > > indicating possible Gallium shortcomings leading to this.
> > 
> > The reasons for that are mostly political rather than technical.
> 
> Sorry, couldn't resist this flamebait.
> 
> My message wrt Gallium has been consistent at least, and I know the
> other Intel developers agree with me (though they may have additional
> issues with some of the interfaces specifically).
> 
> Moving to Gallium would be a huge effort for us.  We've invested a lot
> into the current drivers, stabilizing them, adding features, and
> generally supporting them.  If we moved to Gallium, much of that effort
> would be thrown away as with any large rewrite, leaving users in a
> situation where the driver that worked was unsupported and the one that
> was supported didn't work very well (at least for quite some time).  

This may be true now, but only because you guys refused to pick up
Gallium early on. That's what I was referring to: the technical reasons
above are merely consequences of that decision, IMHO.

If you had picked it up, the resulting drivers could be expected to be
at least as stable and performant by now, and would definitely provide
more features (e.g. OpenVG support). Gallium as a whole would probably
be better for it as well.


> I really wish the move to Gallium had been a more gradual evolution of
> the current code base, since it would have allowed working drivers to
> take advantage of the new infrastructure over time (though not having
> worked with Gallium I won't pretend to suggest how this might have
> worked best).

Indeed, it would have been difficult, I think, given Gallium's goal of a
radically simplified driver interface.


-- 
Earthling Michel Dänzer   | http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Dave Airlie
2010/4/13 Michel Dänzer :
> On Mon, 2010-04-12 at 10:12 -0700, Jesse Barnes wrote:
>> On Mon, 12 Apr 2010 09:00:57 +0200
>> Michel Dänzer  wrote:
>>
>> > On Mon, 2010-04-12 at 08:00 +0200, Luca Barbieri wrote:
>> > > The Intel drivers also appear to be in the same situation, with
>> > > classic drivers not being dropped in favor of Gallium ones, also
>> > > indicating possible Gallium shortcomings leading to this.
>> >
>> > The reasons for that are mostly political rather than technical.
>>
>> Sorry, couldn't resist this flamebait.
>>
>> My message wrt Gallium has been consistent at least, and I know the
>> other Intel developers agree with me (though they may have additional
>> issues with some of the interfaces specifically).
>>
>> Moving to Gallium would be a huge effort for us.  We've invested a lot
>> into the current drivers, stabilizing them, adding features, and
>> generally supporting them.  If we moved to Gallium, much of that effort
>> would be thrown away as with any large rewrite, leaving users in a
>> situation where the driver that worked was unsupported and the one that
>> was supported didn't work very well (at least for quite some time).
>
> This may be true now, but only because you guys refused to pick up
> Gallium early on. That's what I was referring to, the technical reasons
> above are merely consequences of that decision IMHO.

No offence to gallium, but I don't think it's been mature enough to
ship a driver for as long as Intel have had to ship drivers. I'm not
even sure it's mature enough to ship a driver with yet. I know you guys
have shipped drivers using it, but I don't count the closed drivers
since I haven't heard any good news about them, and svga is kind of a
niche case. I made the point to Keith and TG a long time ago that we
needed an open source showcase gallium driver to show how one should
actually look. Either Intel 965 or ATI r600 would have been perfect
targets. This never materialised as important enough. So I don't think
you can blame Intel; the argument for switching to gallium 2 years ago
wasn't persuasive at all. It's only becoming persuasive now, and my only
real interest stems from llvm'ed vertex shaders as a killer feature on
non-TCL hw, and for doing some TCL fallbacks fast.

> If you had picked it up, the resulting drivers could be expected to be
> at least as stable and performant, but definitely provide more features
> (e.g. OpenVG support) now. Gallium as a whole would probably be better
> for it as well.

I think it would have put Intel 6 months behind schedule, and it was
just after bugmgr (I typoed bufmgr but it seemed apt) which was
already a massive upset, along with dri vs dri2.

I'd really have liked it if the open i915g and i965g had gotten some
serious time allocated. I think the thing that might persuade Intel
the upset is worth it going forward is getting a useful gallivm/915g
combo on their 945-based netbooks (i.e. save power and run Google Earth
well); then for 965 I'd expect some sort of GS support would help. But
still, the regression chasm is wide and I've no idea how to span that
in a distro release cycle.

>
>
>> I really wish the move to Gallium had been a more gradual evolution of
>> the current code base, since it would have allowed working drivers to
>> take advantage of the new infrastructure over time (though not having
>> worked with Gallium I won't pretend to suggest how this might have
>> worked best).
>
> Indeed, would have been difficult I think given Gallium's goal of a
> radically simplified driver interface.
>

I do also wonder about something more akin to making gallium a driver
plugin instead of drivers plugging into it, i.e. a driver tells
gallium to plug itself into all the mesa dd entrypoints, and can then
override any it wants to handle itself. Granted, it wouldn't have
gotten us away from the GL interface. It would probably have looked
like the meta.c stuff we finally started building up to make classic less
painful.
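
(As a rough sketch of the "Gallium plugs itself into the driver entry
points, and the driver overrides a few" idea above; the struct and
function names here are invented for illustration and are not the
actual Mesa dd_function_table interface:)

/* Hypothetical sketch: a table of driver entry points is first filled
 * with Gallium-backed defaults, then a classic driver overrides only
 * the hooks it wants to handle itself.  Names are invented. */
#include <stdio.h>

struct hypo_driver_funcs {
   void (*clear)(void);
   void (*draw)(void);
};

static void gallium_clear(void) { printf("gallium default clear\n"); }
static void gallium_draw(void)  { printf("gallium default draw\n"); }

/* Gallium plugs itself into every entry point. */
static void gallium_init_driver_funcs(struct hypo_driver_funcs *f)
{
   f->clear = gallium_clear;
   f->draw  = gallium_draw;
}

/* A classic driver keeps its own hand-tuned clear path. */
static void classic_fast_clear(void) { printf("driver-specific clear\n"); }

int main(void)
{
   struct hypo_driver_funcs funcs;

   gallium_init_driver_funcs(&funcs);   /* defaults from Gallium */
   funcs.clear = classic_fast_clear;    /* driver override       */

   funcs.clear();
   funcs.draw();
   return 0;
}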

Dave


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Luca Barbieri
Has Intel or anyone else considered open sourcing their Windows
DirectX 10 user mode DDI drivers, porting them to Gallium and filling
in the missing GL-specific functionality from the GL drivers?

That might prove easier than porting the GL drivers (the DirectX 10
design is much closer to Gallium), and it would allow taking advantage
of the Windows codebase, which is likely to have had the benefit of
much more work done on it.

With the addition of a DX10 state tracker, you could then build Linux
and Windows drivers from the same codebase and join the driver teams,
with obvious benefits.

I think this should be the real advantage of Gallium from the
perspective of a hardware company: coverage of all APIs (OpenGL,
X11/EXA, DirectX 10, maybe DirectX 9 too) from a single codebase.

The fact that VMware does not release their DirectX state trackers
hampers this somewhat, but they can be independently reimplemented,
and they may be willing to license them to Intel or other companies.


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Dave Airlie
On Tue, Apr 13, 2010 at 6:23 PM, Luca Barbieri  wrote:
> Has Intel or anyone else considered open sourcing their Windows
> DirectX 10 user mode DDI drivers, porting them to Gallium and filling
> in the missing GL-specific functionality from the GL drivers?
>
> That might prove easier than porting the GL drivers (the DirectX 10
> design is much closer), and allows to take advantage of the Windows
> codebase, which is likely to have had the benefit of much more work
> done on it.
>
> With the addition of a DX10 state tracker, you could then build Linux
> and Windows drivers from the same codebase and join the driver teams,
> with obvious benefits.
>
> I think this should be the real advantage of Gallium, from the
> perspective of an hardware company: coverage of all APIs (OpenGL,
> X11/EXA, DirectX 10, maybe DirectX 9 too) from a single codebase.
>
> The fact that VMware does not release their DirectX state trackers
> hampers this somewhat, but they can be independently reimplemented,
> and they may be willing to license them to Intel or other companies.

I think if Intel had a cross-OS team it would be a possibility, but
they don't, and I'm not sure they could even manage it. The problem
with releasing a merged OS driver isn't the code so much as the
lawyers, I'd suspect.

Dave.


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Keith Whitwell
2010/4/13 Dave Airlie :

> No offence to gallium, but I don't think its been mature enough to
> ship a driver for as long as Intel have had to ship drivers. I'm not
> even sure its mature enough to ship a driver with yet. I know you guys
> have shipped drivers using it, but I don't count the closed drivers
> since I haven't heard any good news about them, and svga is kinda a
> niche case. I've made the point to Keith and TG a long time ago that
> we needed an open source show case gallium driver to show how one
> should actually look.

>  Either Intel 965 or ATI r600 would have been
> perfect targets. This never materialised as important enough.

It's a major regret for me that I haven't been able to put more time
into i965g.  I completely agree that having a full-strength hardware
driver for a modern GPU as an example driver, reference
implementation, etc, would be a hugely valuable resource for gallium,
and that the i965 is/was a perfect platform to achieve that.  I'm open
to ideas about how to get that project unstuck.

Keith


Re: [Mesa-dev] Mesa (master): scons: Make debug build default.

2010-04-13 Thread Michel Dänzer
On Sun, 2010-04-11 at 01:23 -0700, Jose Fonseca wrote: 
> Module: Mesa
> Branch: master
> Commit: 21780adc2ed1b10c5c4c71427b8212b8464d065d
> URL:
> http://cgit.freedesktop.org/mesa/mesa/commit/?id=21780adc2ed1b10c5c4c71427b8212b8464d065d
> 
> Author: José Fonseca 
> Date:   Sat Apr 10 02:44:52 2010 +0100
> 
> scons: Make debug build default.
> 
> I've been back and forth on this, but I believe it's worth to have debug
> by default.
> 
> Most humans (developers, testers) will want to use the debug version  by
> default.  Many build bots want release but they are bots, and humans >
> bots, so I don't care that much.
> 
> This is part of my initiative of minimizing the scons option mess many
> complain about.

I wonder if a single boolean option is expressive enough for this
though. E.g., with the traditional DRI drivers, I can build with
--enable-debug and get more or less the same performance but with some
debugging features such as assertions[0]. scons debug=1 tends to incur
much more overhead, making it impractical to keep enabled for builds
used on a day-to-day basis.

[0] Dave Airlie pointed out on IRC that assertions really shouldn't be
restricted to 'debug' builds but should only be disabled for 'release'
builds (with NDEBUG defined).
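
(For reference, the standard C behaviour being described here: assert()
is only compiled out when NDEBUG is defined, so only 'release' builds
should define it. A minimal, self-contained example:)

/* assert() is active by default and disappears only when NDEBUG is
 * defined before <assert.h> is included -- i.e. in a 'release' build.
 * Build with:  cc demo.c            -> the assertion fires and aborts
 *              cc -DNDEBUG demo.c   -> the assertion is compiled out  */
#include <assert.h>
#include <stdio.h>

int main(void)
{
   int buffers = 0;
   assert(buffers > 0 && "no buffers allocated");
   printf("reached only when NDEBUG is defined\n");
   return 0;
}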

So maybe there should be a separate 'release' option, and possibly
several levels of debug. Not really sure what makes the most sense.


-- 
Earthling Michel Dänzer   | http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread José Fonseca
On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
> No offence to gallium, but I don't think its been mature enough to
> ship a driver for as long as Intel have had to ship drivers. I'm not
> even sure its mature enough to ship a driver with yet. I know you guys
> have shipped drivers using it, but I don't count the closed drivers
> since I haven't heard any good news about them, and svga is kinda a
> niche case. 

First, I don't care about who jumps onto the Gallium ship or ignores us.
Certainly the more the merrier, but Gallium is a worthwhile proposition
even if the whole world decides to ignore us.

But I can't let this "gallium is not mature" excuse go unchallenged.

When you say "I'm not sure it [Gallium] is mature enough to ship a
driver with yet", what components exactly are you referring to?

A Gallium GL driver is composed of:
- Mesa. The very same used on classic drivers. So I suppose you don't
refer to it.
- The Mesa state tracker. Quite small. A lot of it is quite similar to
the Mesa meta.c stuff.
- The Gallium interface. Always in flux, granted, but what exactly is it
missing to ship a stable driver?
- Auxiliary modules. All optional.
- The pipe driver -- this is *not* Gallium -- just like a Mesa driver
is not Mesa. And is as stable as people make it.
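
(A rough sketch of the layering listed above, using invented names
rather than the real p_screen.h/p_context.h structures: the shared
state tracker drives a small vtable of screen/context hooks, and that
vtable is essentially all a pipe driver has to fill in.)

/* Illustrative only: a toy version of the split described above.  The
 * state tracker (Mesa, OpenVG, ...) talks to a small vtable that the
 * pipe driver provides; everything above that line is shared code.
 * All names are invented for this sketch. */
#include <stdio.h>

struct toy_screen {
   const char *(*get_name)(struct toy_screen *s);            /* driver hook */
};

struct toy_context {
   struct toy_screen *screen;
   void (*clear)(struct toy_context *c, float r, float g, float b);
   void (*draw)(struct toy_context *c, unsigned num_vertices);
};

/* The "pipe driver" part: hardware-specific, written per GPU. */
static const char *mychip_get_name(struct toy_screen *s) { (void)s; return "mychip"; }
static void mychip_clear(struct toy_context *c, float r, float g, float b)
{ (void)c; printf("clear %.1f %.1f %.1f\n", r, g, b); }
static void mychip_draw(struct toy_context *c, unsigned n)
{ (void)c; printf("draw %u vertices\n", n); }

/* The "state tracker" part: API-specific, hardware-agnostic, shared. */
static void state_tracker_frame(struct toy_context *ctx)
{
   printf("running on %s\n", ctx->screen->get_name(ctx->screen));
   ctx->clear(ctx, 0.0f, 0.0f, 0.0f);
   ctx->draw(ctx, 3);
}

int main(void)
{
   struct toy_screen scr = { mychip_get_name };
   struct toy_context ctx = { &scr, mychip_clear, mychip_draw };
   state_tracker_frame(&ctx);
   return 0;
}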

All things considered, it is way less effort to write and maintain a GL
Gallium driver than a Mesa driver. So if there isn't a stable Gallium
driver for hardware X, Y or Z it is simply because nobody put their back
into making it happen.

Also, the closed drivers that you decided not to count were as stable as
they could be in the allocated time. When we were stabilizing the
Windows GL SVGA driver we fixed loads of *Mesa* bugs, because the
windows-only applications we tested with had never been tested before.
Actually, looking back, most of the bugs we had were in the pipe driver
and Mesa. That is, relatively few were in the components that make up
the Gallium infrastructure.

Migrating to Gallium is a time investment. One puts time into it, and
then expects it to pay off: either because of the increased code
sharing, the additional API support, or the optimizations that are only
possible because they extend beyond a single driver. There are several
reasons, but they might not appeal to everyone.

I can concede that migrating may not have been perceived as a
worthwhile investment by Intel two years ago, today, or in the
foreseeable future. It is for the Intel maintainers to decide whether
the pluses outweigh the minuses.

But please stop trying to find excuses in Gallium for why driver A/B/C
has not migrated.

> I've made the point to Keith and TG a long time ago that
> we needed an open source show case gallium driver to show how one
> should actually look. Either Intel 965 or ATI r600 would have been
> perfect targets. This never materialised as important enough. So I
> don't think you can blame Intel, the argument for switching to gallium
> 2 years ago wasn't pervasive at all. Its only coming to be pervasive
> now, and my only real interest stems from llvm'ed vertex shaders as a
> killer features on non-TCL hw, and for doing some tcl fallbacks fast.

Trying to port drivers to gallium without the de facto maintainers'
cooperation is effectively a fork, and if we don't have the resources to
sustain the fork then it is bound to die.

Furthermore, I don't think we still need to prove Gallium to anybody.
The architecture makes sense, and there is plenty of proof it works
for those who want to see it. The odds are that if we don't do the
porting work then somebody else will eventually do it when there is an
itch to scratch. And we should focus on where it really matters at
present.

> I do also wonder if something more akin to making gallium as a driver
> plugin instead of drivers plugging into it, i.e. a driver tells
> gallium to plug itself into all the mesa dd entrypoints, then can
> override any it wants to do itself. Granted it wouldn't have gotten
> use away from the GL interface. It would have probably looked like the
> meta.c stuff we finally started building up, to make classic less
> painful.

Gallium in its essence is the abstraction between the graphics APIs and
the graphics HW. What you propose here sounds more like porting some of
its utility code or part of its implementation and providing it as an
auxiliary to Mesa drivers. But doing that would leave nothing of
Gallium's essence.

Combining a Mesa classic driver with the Mesa state tracker sounds
technically feasible initially, but in order to share surfaces and
internal state you'll need to share code, hence the classic driver will
end up looking pretty much like the Mesa state tracker + a pipe driver
anyway. And all the interactions between the classic and gallium code
paths will probably introduce more bugs than they prevent.

Jose



[Mesa-dev] [PATCH] gallium: Remove pipe_screen::update_buffer.

2010-04-13 Thread Chia-I Wu
From: Chia-I Wu 

It has no user after the removal of st_public.  Plus, it has never been
implemented by a pipe driver or winsys.
---
 src/gallium/auxiliary/util/u_simple_screen.h |    5 -
 src/gallium/include/pipe/p_screen.h          |    7 ---
 2 files changed, 0 insertions(+), 12 deletions(-)

diff --git a/src/gallium/auxiliary/util/u_simple_screen.h b/src/gallium/auxiliary/util/u_simple_screen.h
index de6325f..b52232f 100644
--- a/src/gallium/auxiliary/util/u_simple_screen.h
+++ b/src/gallium/auxiliary/util/u_simple_screen.h
@@ -53,11 +53,6 @@ struct pipe_winsys
    const char *(*get_name)( struct pipe_winsys *ws );
 
    /**
-    * Do any special operations to ensure buffer size is correct
-    */
-   void (*update_buffer)( struct pipe_winsys *ws,
-                          void *context_private );
-   /**
     * Do any special operations to ensure frontbuffer contents are
     * displayed, eg copy fake frontbuffer.
     */
diff --git a/src/gallium/include/pipe/p_screen.h b/src/gallium/include/pipe/p_screen.h
index dd7c35e..06ab4a8 100644
--- a/src/gallium/include/pipe/p_screen.h
+++ b/src/gallium/include/pipe/p_screen.h
@@ -170,13 +170,6 @@ struct pipe_screen {
                                              unsigned bind_flags);
 
    /**
-    * Do any special operations to ensure buffer size is correct
-    * \param context_private  the private data of the calling context
-    */
-   void (*update_buffer)( struct pipe_screen *ws,
-                          void *context_private );
-
-   /**
     * Do any special operations to ensure frontbuffer contents are
     * displayed, eg copy fake frontbuffer.
     * \param winsys_drawable_handle  an opaque handle that the calling context
-- 
1.7.0



Re: [Mesa-dev] GLES1/2 and DRI drivers

2010-04-13 Thread Chia-I Wu
On Mon, Apr 12, 2010 at 12:37:10PM -0400, Kristian Høgsberg wrote:
> I've been looking into the GLES1/2 support in mesa and trying to
> figure out how to make it work for DRI drivers as well.  The current
> approach only works for gallium, and it works by compiling mesa core
> as different state trackers.  Each state tracker is just a thin filter
> on top of the public API and in the end, the result is essentially
> three copies of the mesa state tracker that all load the same gallium
> chipset driver to deal with the hardware.  As far as I understand it,
> anyway.
> I would like to propose that we structure the code a bit differently,
> specifically I would like to see a way where we can load one DRI
> driver which can implement multiple GL APIs.  I understand that
> gallium was designed to support mulitple APIs, however, in the case of
> gl/gles1/gles2, there is a big overlap, and we can support all three
> without different state trackers.
> Specifically, what I'm thinking of is
>  - the dri driver gets a new entry point that lets us create a context
> for a specified API (along these lines:
> http://cgit.freedesktop.org/~krh/mesa/commit/?h=gles2&id=707ad2057e5a2ab2e5fa36be77de373ed98967c5)
>  - mesa core becomes multi api aware, struct gl_context gets a new API field
>  - move the es entry points from src/mesa/es into src/mesa/main

These all look good to me.  The two bigger issues I can think of now are
the merge of the GLAPI XMLs and get_gen.py.  I have never had a chance
to look at the GLES version of get_gen.py, and I think it might require
quite some manual editing to merge.  The GLAPI XMLs are less trouble if
one is to create a big dispatch table, but it might be good to be able
to create a small GLES2-only dispatch table when so configured.

>  - create src/gles1 and src/gles2 directories for compiling
> libGLESv1.so and libGLESv2.so; basically glapi-es2 as a shared object
> file.

In EGL/Gallium, there are three copies of libmesagallium.a, in libGL.so,
libGLESv1_CM.so, and libGLESv2.so respectively.  In EGL/DRI2, there is
one copy of libmesa.a in each DRI driver.  It is hard to say which is
better since neither is quite right.

It is never a sane idea to break DRI drivers, but I am wondering whether
it is possible to construct libGL*.so in a way that both EGL/Gallium and
EGL/DRI2 will work, while you (or we) are at it.

For example, in this proposal, it seems libGLESv2.so will consist of
only glapi-es2.  Compared with src/state_trackers/es/, it is missing a
symbol `st_module_OpenGL_ES2` whose sole purpose is to create an st_api.
If we can add this symbol and dynamically load a new library (consisting
of libmesagallium.a) when requested to create an st_api, then both
EGL/Gallium and EGL/DRI2 would work with this libGLESv2.so.  This would
add a minimal amount of code to libGLESv2.so.  Plus, there would only be
a single copy of libmesagallium.a on the system, since all libGL*.so
would load the same library.

> Obviously, we should keep the option to compile mesa state tracker as
> gles1 or gles2 only for example (to allow building a small gles2-only
> dri driver and to keep the current gallium setup working).

Yes, a small gles2-only dri driver/state tracker is preferable.  While
gl_context is made multi-API aware, we may still use FEATURE macros
to disable a good portion of the mesa code at compile time, and disable the
creation of APIs that depend on the code.
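
(A rough sketch of the multi-API context idea discussed above: the
context records which API it was created for, and the shared core
checks that field wherever the APIs diverge.  The enum, struct and
function names below are invented for the example, not Mesa's actual
ones.)

/* Toy illustration: one core, several client APIs.  The API is chosen
 * at context-creation time and consulted where the specs differ,
 * e.g. glBegin/glEnd exists in desktop GL but not in ES1/ES2. */
#include <stdio.h>

enum toy_api { TOY_API_OPENGL, TOY_API_OPENGLES1, TOY_API_OPENGLES2 };

struct toy_gl_context {
   enum toy_api api;                      /* set once, at creation */
};

/* A hypothetical "create a context for this API" entry point. */
static void toy_create_context(struct toy_gl_context *ctx, enum toy_api api)
{
   ctx->api = api;
}

static void toy_Begin(struct toy_gl_context *ctx)
{
   if (ctx->api != TOY_API_OPENGL) {
      printf("INVALID_OPERATION: glBegin is not available in ES contexts\n");
      return;
   }
   printf("glBegin accepted\n");
}

int main(void)
{
   struct toy_gl_context gl, es2;

   toy_create_context(&gl, TOY_API_OPENGL);
   toy_create_context(&es2, TOY_API_OPENGLES2);
   toy_Begin(&gl);    /* accepted        */
   toy_Begin(&es2);   /* rejected for ES */
   return 0;
}
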
> This is all still work in progress for me, but I'm curious what people
> think of this approach.

-- 
o...@lunarg.com


Re: [Mesa-dev] Mesa (master): scons: Make debug build default.

2010-04-13 Thread José Fonseca
On Tue, 2010-04-13 at 02:12 -0700, Michel Dänzer wrote:
> On Sun, 2010-04-11 at 01:23 -0700, Jose Fonseca wrote: 
> > Module: Mesa
> > Branch: master
> > Commit: 21780adc2ed1b10c5c4c71427b8212b8464d065d
> > URL:
> > http://cgit.freedesktop.org/mesa/mesa/commit/?id=21780adc2ed1b10c5c4c71427b8212b8464d065d
> > 
> > Author: José Fonseca 
> > Date:   Sat Apr 10 02:44:52 2010 +0100
> > 
> > scons: Make debug build default.
> > 
> > I've been back and forth on this, but I believe it's worth to have debug
> > by default.
> > 
> > Most humans (developers, testers) will want to use the debug version  by
> > default.  Many build bots want release but they are bots, and humans >
> > bots, so I don't care that much.
> > 
> > This is part of my initiative of minimizing the scons option mess many
> > complain about.
> 
> I wonder if a single boolean option is expressive enough for this
> though. E.g., with the traditional DRI drivers, I can build with
> --enable-debug and get more or less the same performance but some
> debugging features such as assertions[0]. scons debug=1 tends to incur
> much more overhead, making it impractical to have it enabled for builds
> being used on a day-to-day basis.
> 
> [0] Dave Airlie pointed out on IRC that assertions really shouldn't be
> restricted to 'debug' builds but should only be disabled for 'release'
> builds (with NDEBUG defined).
> 
> So maybe there should be a separate 'release' option, and possibly
> several levels of debug. Not really sure what makes the most sense.

Yes, we could have the code optimization flags controlled independently
from the debugging checks via this separate option.

But what to do about the expensive debugging checks? There is no
guarantee that when the debugging checks are enabled the performance
will be of the same order.

Another problem is the build dir name. So far we have only
build/linux-x86 and build/x86-debug. Do we want different names for all
these combinations, and what should they be?

Build time with optimization + debug checks is as slow as optimization
without debug checks. For me, having two separate builds simultaneously
-- one for debugging, which builds fast and has debugging checks, the
other for performance, which needs more time to build -- seems a better
setup.

Jose



Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Dave Airlie
On Tue, Apr 13, 2010 at 8:01 PM, José Fonseca  wrote:
> On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
>> No offence to gallium, but I don't think its been mature enough to
>> ship a driver for as long as Intel have had to ship drivers. I'm not
>> even sure its mature enough to ship a driver with yet. I know you guys
>> have shipped drivers using it, but I don't count the closed drivers
>> since I haven't heard any good news about them, and svga is kinda a
>> niche case.
>
> First, I don't core about who jumps into the Gallium ship or ignore us.
> Certainly the more the merrier, but Gallium is a worthwhile proposition
> even if the whole world decides to ignore us.
>
> But I can't let this "gallium is not mature" excuse go unchallenged.

No open shipping driver on real hardware in any Linux distro == not
mature in my opinion. The interfaces remain unproven on real hw
platforms; the fact that the nouveau people are raising issues as we get
closer to shipping stuff is a sign of it. I'll retract the "yet"
statement: I'm still happy with where Gallium is right now, and it's
probably mature enough now after the last bunches of merges, but long
term we'll see how the interfaces stand up and regressions come and go.
> - Gallium interface. Always in flux, granted, but what exactly is it
> missing to ship stable driver?

Pretty much this. The interface is still under such heavy development
that I can't say I could have released r300g and QAed it in the zones of
stability in master. Yes, I could have stuck to the Mesa 7.8 version of
gallium, but then I'd just have to start the process again for 7.9. The
thing is, you aren't developing one-off snapshots for contracts anymore;
we need something that is sustainable and regression-testing friendly so
that we can produce rolling working drivers every 3-6 months with a
minimum of regressions. I think we have gotten a lot closer to this,
but we'll see as we start to actually ship gallium drivers in Linux
distros.

> - Auxiliary modules. All optional.
> - The pipe driver -- this is *not* Gallium -- just like as a Mesa driver
> is not Mesa. And is as stable as people make it.
>
> All things considered, it is way less effort to write and maintain a GL
> Gallium driver than a Mesa driver. So if there isn't a stable Gallium
> driver for hardware X, Y or Z it is simply because nobody put their back
> into making it happen.

For any hardware at all? You really don't see a problem with that?

>
> Also, the closed drivers that you decided not to count were as stable as
> they could be in the allocated time. When we were stabilizing the
> Windows GL SVGA driver we fixed loads of *Mesa* bugs, because all the
> windows-only applications we tested with that were never tested before.
> Actually looking back, most of the bugs we had were in the pipe driver
> and Mesa. That is, relatively few were in the components that make the
> Gallium infrastructure.

As I said, SVGA doesn't count: it's not real hw. It relies on much more
stable host drivers, yes, and is a great test platform for running DX
conformance, but you cannot use it as a parallel to real hardware. The
closed drivers were paid-for embedded one-offs, dead ends with no
sustained development. So again I can't count them.

>
> Migrating into Gallium is a time investment. One puts time into it, and
> then it expects it pays off: either because of the increased code
> sharing, more API support, or the optimizations it has which are only
> possible because they extend beyond a single driver. There are several
> reasons, but they might not appeal to all.
>
> I can concede that migrating may not have been perceived as an
> worthwhile investment by Intel two years ago, today, or in the
> foreseeable future. It is for Intel maintainers to decide whether the
> pluses are more than the minus.
>
> But please stop trying to find excuses in Gallium for why driver A/B/C
> has not migrated.

No driver has migrated as of yet. Lots of us are investing in
migrating drivers and can see the upside; however, I can also see why
Intel remains reticent.

>
> Trying to port drivers to gallium without the de facto maintainers
> cooperation is effectively a fork, and if we don't have resources to
> sustain the fork then it is bound to die.
>
> Furthermore, I don't think we still need to prove Gallium to anybody.
> The architecture makes sense, and there are plenty of proofs it works
> for those who want to see. The odds are if we don't do the porting work
> then somebody else will eventually do it when there is an itch to
> scratch. And we should focus where it really matters at the present.

I still think a real driver is more important than anything else; you
can't showcase something without one. You think Gallium sells itself;
it would have if a real optimised open driver had existed from close
to the start. It's a pity the opportunity was lost, but at this point
maybe enough people are scratching itches. A 915 or 965 driver that
produces similar speed as the classic driver

Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Michel Dänzer
On Tue, 2010-04-13 at 20:52 +1000, Dave Airlie wrote: 
> On Tue, Apr 13, 2010 at 8:01 PM, José Fonseca  wrote:
> > On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
> 
> > Also, the closed drivers that you decided not to count were as stable as
> > they could be in the allocated time. When we were stabilizing the
> > Windows GL SVGA driver we fixed loads of *Mesa* bugs, because all the
> > windows-only applications we tested with that were never tested before.
> > Actually looking back, most of the bugs we had were in the pipe driver
> > and Mesa. That is, relatively few were in the components that make the
> > Gallium infrastructure.
> 
> As I said SVGA doesn't count its not real hw, it relies on much more
> stable host drivers yes, and is a great test platform for running DX
> conformance, but you cannot use it as a parallel to real hardware.

Why not? It looks like a GPU. It acts like a GPU. (Maybe it even smells
like a GPU? :) It must be a GPU.

I agree a showcase driver for real hardware would be preferable, but the
above seems like an unfair dismissal of the svga driver.

> The closed drivers were paid for embedded one-offs, no sustained
> developement dead ends. So again I can't count them.

Actually one of the goals of Gallium was to increase the sustainability
of the efforts under that model, by making more of the code shared /
reusable. I think it's worked out pretty well.


-- 
Earthling Michel Dänzer   | http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Dave Airlie
On Tue, Apr 13, 2010 at 9:08 PM, Michel Dänzer  wrote:
> On Tue, 2010-04-13 at 20:52 +1000, Dave Airlie wrote:
>> On Tue, Apr 13, 2010 at 8:01 PM, José Fonseca  wrote:
>> > On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
>>
>> > Also, the closed drivers that you decided not to count were as stable as
>> > they could be in the allocated time. When we were stabilizing the
>> > Windows GL SVGA driver we fixed loads of *Mesa* bugs, because all the
>> > windows-only applications we tested with that were never tested before.
>> > Actually looking back, most of the bugs we had were in the pipe driver
>> > and Mesa. That is, relatively few were in the components that make the
>> > Gallium infrastructure.
>>
>> As I said SVGA doesn't count its not real hw, it relies on much more
>> stable host drivers yes, and is a great test platform for running DX
>> conformance, but you cannot use it as a parallel to real hardware.
>
> Why not? It looks like a GPU. It acts like a GPU. (Maybe it even smells
> like a GPU? :) It must be a GPU.

It is close, and it's definitely been a great help in fixing up r300g in
parts to know how the gallium authors intended a driver to be written.
Well documented interfaces are a good thing, but knowing the intent of
the interface designers is worth a hell of a lot more. The main reason
I'd like to have seen a real optimised hw driver from the gallium
interface designers is to show how they intend the interfaces to be
used in an optimised manner.  i915g and i965g had no buffer
optimisation strategy, and that made it harder to work out how the
pb_buf* and pipe_buffers interaction was meant to be done. Nouveau for
example still hasn't gotten a useful buffer management strategy, and I
feel a lot of the 'I want to copy the blob' stuff comes from them not
having a good example to work from.

> I agree a showcase driver for real hardware would be preferable, but the
> above seems like an unfair dismissal of the svga driver.

It's just not the same thing; I've spent a bit of time working on
virtual GPUs and I know how different the model is from real hw,
especially in terms of memory access speeds and buffer processing
speeds. Though as I said, svga is at least a better example than
anything else that went before it, and I'm quite thankful for it.


>> The closed drivers were paid for embedded one-offs, no sustained
>> developement dead ends. So again I can't count them.
>
> Actually one of the goals of Gallium was to increase the sustainability
> of the efforts under that model, by making more of the code shared /
> reusable. I think it's worked out pretty well.
>

It's just that we don't have any examples of this that have worked out,
due to the TG->VMware transition making that model less of a focus for
Mesa (which is a good thing for everyone ;-).

Dave.


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Luca Barbieri
>> As I said SVGA doesn't count its not real hw, it relies on much more
>> stable host drivers yes, and is a great test platform for running DX
>> conformance, but you cannot use it as a parallel to real hardware.
>
> Why not? It looks like a GPU. It acts like a GPU. (Maybe it even smells
> like a GPU? :) It must be a GPU.
>
> I agree a showcase driver for real hardware would be preferable, but the
> above seems like an unfair dismissal of the svga driver.

The problem is that svga does not address the issue of whether the
performance of ad-hoc proprietary OpenGL drivers (nvidia and fglrx)
can be matched with Gallium, unless svga manages to achieve very close
to "bare hardware" performance.

How fast is svga with OpenGL on a Linux guest versus native OpenGL
with the nVidia proprietary drivers? (on released VMware products, so
that it is public information)

Clearly performance is the only issue there: anything else can be
solved by just extending Gallium, but if you discover that a major
portion of CPU time is going to translating OpenGL to Gallium, this
might require a complex, massive refactoring of everything to fix.

Right now much bigger problems (e.g. memory management) generally make
it impossible to tell by profiling on any hardware, but at some point
this will become clear, possibly with disappointing realizations.

The nv50 driver might be reaching this point, and an attempt was made
to also write a classic Mesa driver to compare their performance, but
that effort seems to have been abandoned before it could produce such
information.
CCing Christoph Bumiller for this.

If you look at Mesa and the Mesa Gallium state tracker from the
perspective of minimizing the CPU cycles and cache misses spent in the
drivers, you will likely be struck by the sheer amount of inefficiency:
all the useless conversions wasting CPU time, and the unnecessary
proliferation of objects, some large, in memory, causing all the obvious
allocation and cache-behavior issues.

And if you read what nVidia has to say on the topic, at
http://developer.nvidia.com/object/bindless_graphics.html, you'll
realize that the Gallium design does not hold such concerns in much
regard (except for the idea of using CSOs)
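
(For readers unfamiliar with the term: a CSO -- constant state object --
is Gallium's pattern of validating and translating a group of state once
at creation time, so that binding it per draw is cheap.  A toy
illustration with invented names, not the real pipe_context hooks:)

/* Toy illustration of the CSO pattern: the expensive API-to-hardware
 * translation happens once at create time; binding the pre-baked
 * object later is just a pointer swap.  All names are invented. */
#include <stdio.h>
#include <stdlib.h>

struct toy_blend_state { int enable; };        /* what the API hands us   */
struct toy_blend_cso   { unsigned hw_word; };  /* pre-baked hardware form */

/* Expensive translation/validation happens once, at create time. */
static struct toy_blend_cso *create_blend_cso(const struct toy_blend_state *s)
{
   struct toy_blend_cso *cso = malloc(sizeof(*cso));
   cso->hw_word = s->enable ? 0x1u : 0x0u;     /* pretend this is costly */
   return cso;
}

/* Binding per draw call is trivially cheap. */
static void bind_blend_cso(const struct toy_blend_cso **current,
                           const struct toy_blend_cso *cso)
{
   *current = cso;
}

int main(void)
{
   struct toy_blend_state api_state = { 1 };
   struct toy_blend_cso *cso = create_blend_cso(&api_state);
   const struct toy_blend_cso *bound = NULL;
   int i;

   for (i = 0; i < 3; i++) {                   /* per-draw loop */
      bind_blend_cso(&bound, cso);
      printf("draw %d with blend word 0x%x\n", i, bound->hw_word);
   }
   free(cso);
   return 0;
}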

Whether this is relevant or not is unclear, but it is the real concern IMHO.
That would still be fixable, but it would require a much more significant
willingness to refactor and rewrite things, and in particular I doubt
the Mesa data structures that classic drivers need could be supported
through this.
Unless, of course, one makes the whole issue moot by exposing a
different API than OpenGL, such as DirectX 10, which fits Gallium much
better -- but that is an even bigger overall shift in direction.


Re: [Mesa-dev] [PATCH] gallium: Remove pipe_screen::update_buffer.

2010-04-13 Thread Keith Whitwell
Looks good to me.

Keith

On Tue, Apr 13, 2010 at 11:22 AM, Chia-I Wu  wrote:
> From: Chia-I Wu 
>
> It has no user after the removal of st_public.  Plus, it has never been
> implemented by a pipe driver or winsys.
> ---
>  src/gallium/auxiliary/util/u_simple_screen.h |    5 -
>  src/gallium/include/pipe/p_screen.h          |    7 ---
>  2 files changed, 0 insertions(+), 12 deletions(-)
>
> diff --git a/src/gallium/auxiliary/util/u_simple_screen.h 
> b/src/gallium/auxiliary/util/u_simple_screen.h
> index de6325f..b52232f 100644
> --- a/src/gallium/auxiliary/util/u_simple_screen.h
> +++ b/src/gallium/auxiliary/util/u_simple_screen.h
> @@ -53,11 +53,6 @@ struct pipe_winsys
>    const char *(*get_name)( struct pipe_winsys *ws );
>
>    /**
> -    * Do any special operations to ensure buffer size is correct
> -    */
> -   void (*update_buffer)( struct pipe_winsys *ws,
> -                          void *context_private );
> -   /**
>     * Do any special operations to ensure frontbuffer contents are
>     * displayed, eg copy fake frontbuffer.
>     */
> diff --git a/src/gallium/include/pipe/p_screen.h 
> b/src/gallium/include/pipe/p_screen.h
> index dd7c35e..06ab4a8 100644
> --- a/src/gallium/include/pipe/p_screen.h
> +++ b/src/gallium/include/pipe/p_screen.h
> @@ -170,13 +170,6 @@ struct pipe_screen {
>                                               unsigned bind_flags);
>
>    /**
> -    * Do any special operations to ensure buffer size is correct
> -    * \param context_private  the private data of the calling context
> -    */
> -   void (*update_buffer)( struct pipe_screen *ws,
> -                          void *context_private );
> -
> -   /**
>     * Do any special operations to ensure frontbuffer contents are
>     * displayed, eg copy fake frontbuffer.
>     * \param winsys_drawable_handle  an opaque handle that the calling context
> --
> 1.7.0
>
> ___
> mesa-dev mailing list
> mesa-dev@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/mesa-dev
>


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread José Fonseca
On Tue, 2010-04-13 at 03:52 -0700, Dave Airlie wrote:
> On Tue, Apr 13, 2010 at 8:01 PM, José Fonseca  wrote:
> > On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
> >> No offence to gallium, but I don't think its been mature enough to
> >> ship a driver for as long as Intel have had to ship drivers. I'm not
> >> even sure its mature enough to ship a driver with yet. I know you guys
> >> have shipped drivers using it, but I don't count the closed drivers
> >> since I haven't heard any good news about them, and svga is kinda a
> >> niche case.
> >
> > First, I don't core about who jumps into the Gallium ship or ignore us.
> > Certainly the more the merrier, but Gallium is a worthwhile proposition
> > even if the whole world decides to ignore us.
> >
> > But I can't let this "gallium is not mature" excuse go unchallenged.
> 
> No open shipping driver on real hardware in any Linux distro == not
> mature in my opinion. The interfaces remain unproven on real hw
> platforms, the fact the nouveau people are raising issues as we get
> closer to shipping stuff is a sign of it. I'll retract the yet
> statement I'm still happy with where Gallium is right now, its
> probably mature enough now after the last bunches of merges, but long
> term we'll see how the interface stand up and regressions come and go.
> .
> > - Gallium interface. Always in flux, granted, but what exactly is it
> > missing to ship stable driver?
> 
> Pretty much this. The interface is still under such heavy development,
> that I can't say I could have released r300g and QAed in the zones of
> stability in master. Yes I could have stuck to the Mesa 7.8 version of
> gallium
> but then I'd just have to start the process again for 7.9. The thing
> is you aren't developing one-off snapshots for contract anymore, we
> need something that is sustainable and regression testing friendly so
> that we can produce rolling working drivers every 3-6 months with a
> minimum of regressions. I think we have gotten a lot closer to this,
> but we'll see as we start to actually ship gallium drivers in Linux
> distros.

OK. I admit the interface churn doesn't quite fit into the 3-6 months
release cycle.

> > - Auxiliary modules. All optional.
> > - The pipe driver -- this is *not* Gallium -- just like as a Mesa driver
> > is not Mesa. And is as stable as people make it.
> >
> > All things considered, it is way less effort to write and maintain a GL
> > Gallium driver than a Mesa driver. So if there isn't a stable Gallium
> > driver for hardware X, Y or Z it is simply because nobody put their back
> > into making it happen.
> 
> For any hardware at all? you really don't see a problem with that?

No, I really don't. I see the current state as a historical
consequence, in particular of the projects we happened to work on, and
not a shortcoming in the Gallium architecture itself.

If we had officially worked on migrating a driver for particular
hardware, or writing one from scratch, and not been able to succeed,
then I would accept your criticism.

> > Also, the closed drivers that you decided not to count were as stable as
> > they could be in the allocated time. When we were stabilizing the
> > Windows GL SVGA driver we fixed loads of *Mesa* bugs, because all the
> > windows-only applications we tested with that were never tested before.
> > Actually looking back, most of the bugs we had were in the pipe driver
> > and Mesa. That is, relatively few were in the components that make the
> > Gallium infrastructure.
> 
> As I said SVGA doesn't count its not real hw, it relies on much more
> stable host drivers yes, and is a great test platform for running DX
> conformance, but you cannot use it as a parallel to real hardware. The
> closed drivers were paid for embedded one-offs, no sustained
> developement dead ends. So again I can't count them.

That just adds to my point: even on pseudo-hardware such as SVGA, I
found many bugs in Mesa proper, in code shared with all the classic Mesa
drivers that you say are stable. The bugs were in no way SVGA related.
They were simply due to the fact that many Windows GL apps push the
limits way beyond the apps available on Linux.

Which is why I think this "mature" reasoning is all moot when talking
about graphics drivers. At the end of the day what matters is what is
QA'ed and tested: even a supposedly mature component such as Mesa
evidences many bugs when tested with new apps; and the reverse: with
enough QA, testing and debugging, even a supposedly immature component
can become stable rather quickly.

And this is why I state that the fact that there are no stable open
source hardware drivers is a consequence of such an effort never having
been scoped so far, rather than a property of Gallium.

> So don't feel like I'm attacking Gallium here, I'm just trying to
> state its a new technology and I'm very happy developing r300g with
> it, but its far from a proven replacement for classic mesa.

Basically I feel you're measuring the work we did and rel

Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Alex Deucher
On Tue, Apr 13, 2010 at 4:23 AM, Luca Barbieri  wrote:
> Has Intel or anyone else considered open sourcing their Windows
> DirectX 10 user mode DDI drivers, porting them to Gallium and filling
> in the missing GL-specific functionality from the GL drivers?

AMD considered opening at least part of its GL stack in the r5xx
days, but the main problem is 3rd-party IP.  It would take so long to
review, clean, and fix the code that it ends up being easier to write
a driver directly.  Additionally, the base drivers are much different,
so it would need quite a bit of work to hook it into the drm, ddx,
etc.  Not to mention, the GL stack relied on a much more featureful
base driver than the drm provided at the time.  It's a lot of work for
not much gain considering the current size of the open source Linux
market.

Alex


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Jesse Barnes
On Tue, 13 Apr 2010 09:36:13 +0200
Michel Dänzer  wrote:
> > Moving to Gallium would be a huge effort for us.  We've invested a lot
> > into the current drivers, stabilizing them, adding features, and
> > generally supporting them.  If we moved to Gallium, much of that effort
> > would be thrown away as with any large rewrite, leaving users in a
> > situation where the driver that worked was unsupported and the one that
> > was supported didn't work very well (at least for quite some time).  
> 
> This may be true now, but only because you guys refused to pick up
> Gallium early on. That's what I was referring to, the technical reasons
> above are merely consequences of that decision IMHO.

No, it was true even as the first Gallium code was landing in the
repo.  Rewriting everything is always painful, and we already had
plenty of other tasks to keep us busy (see Dave's mail) and cause pain
for everyone.  In hindsight, maybe it wouldn't have been any worse than
what we went through, but since the 3D driver is the biggest part of
the stack, throwing away that part seemed like it would be the biggest
amount of work.

Dave's other points are also good ones; Gallium has yet to be proven
with a big, open source, shipping, and supported driver.  I won't
comment on the closed source stuff; I've heard things but haven't
actually worked on it myself, so I have no idea whether there were good
closed source drivers released or not.

-- 
Jesse Barnes, Intel Open Source Technology Center


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Alex Deucher
On Tue, Apr 13, 2010 at 6:01 AM, José Fonseca  wrote:
> On Tue, 2010-04-13 at 00:55 -0700, Dave Airlie wrote:
>> No offence to gallium, but I don't think its been mature enough to
>> ship a driver for as long as Intel have had to ship drivers. I'm not
>> even sure its mature enough to ship a driver with yet. I know you guys
>> have shipped drivers using it, but I don't count the closed drivers
>> since I haven't heard any good news about them, and svga is kinda a
>> niche case.
>
> First, I don't core about who jumps into the Gallium ship or ignore us.
> Certainly the more the merrier, but Gallium is a worthwhile proposition
> even if the whole world decides to ignore us.
>
> But I can't let this "gallium is not mature" excuse go unchallenged.
>
> When you say "I'm not sure it [Gallium] is mature enough to ship a
> driver with yet", what components exactly are you referring to?
>
> A Gallium GL driver is composed of:
> - Mesa. The very same same used on classic drivers. So I suppose you
> don't refer to it.
> - Mesa state tracker. Quite small. A lot of is quite similar to Mesa
> meta.c stuff.
> - Gallium interface. Always in flux, granted, but what exactly is it
> missing to ship stable driver?
> - Auxiliary modules. All optional.
> - The pipe driver -- this is *not* Gallium -- just like as a Mesa driver
> is not Mesa. And is as stable as people make it.
>
> All things considered, it is way less effort to write and maintain a GL
> Gallium driver than a Mesa driver. So if there isn't a stable Gallium
> driver for hardware X, Y or Z it is simply because nobody put their back
> into making it happen.
>
> Also, the closed drivers that you decided not to count were as stable as
> they could be in the allocated time. When we were stabilizing the
> Windows GL SVGA driver we fixed loads of *Mesa* bugs, because all the
> windows-only applications we tested with that were never tested before.
> Actually looking back, most of the bugs we had were in the pipe driver
> and Mesa. That is, relatively few were in the components that make the
> Gallium infrastructure.
>
> Migrating into Gallium is a time investment. One puts time into it, and
> then it expects it pays off: either because of the increased code
> sharing, more API support, or the optimizations it has which are only
> possible because they extend beyond a single driver. There are several
> reasons, but they might not appeal to all.
>
> I can concede that migrating may not have been perceived as an
> worthwhile investment by Intel two years ago, today, or in the
> foreseeable future. It is for Intel maintainers to decide whether the
> pluses are more than the minus.
>
> But please stop trying to find excuses in Gallium for why driver A/B/C
> has not migrated.
>
>> I've made the point to Keith and TG a long time ago that
>> we needed an open source show case gallium driver to show how one
>> should actually look. Either Intel 965 or ATI r600 would have been
>> perfect targets. This never materialised as important enough. So I
>> don't think you can blame Intel, the argument for switching to gallium
>> 2 years ago wasn't pervasive at all. Its only coming to be pervasive
>> now, and my only real interest stems from llvm'ed vertex shaders as a
>> killer features on non-TCL hw, and for doing some tcl fallbacks fast.
>
> Trying to port drivers to gallium without the de facto maintainers
> cooperation is effectively a fork, and if we don't have resources to
> sustain the fork then it is bound to die.
>

I'm sure developers would be interested in cooperating;  AMD certainly
would be.  We would have preferred to do the original r600 3D driver
on gallium, but at the time we needed to support UMS and the legacy
drm.   KMS and memory manager support were not even implemented for
r6xx+ class hardware at that time.  I realize a good memory manager is
not a hard requirement, but it would have been pretty painful.

Alex


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Corbin Simpson
[snip'd]

Two observations:

1) I wrote most of a Gallium driver. By myself. It took OVER 9000
lines of code, but it happened. I'd say that an interface that permits
one mediocre coder armed with docs to craft a working, simple driver
in a couple months (effectively three man-months, by my estimate) is a
roaring success.

2) I worked by myself. Except for occasional patches from the
community (Marek, Joakim, Nicolai) and lately from Dave, the initial
bringup was something I had to do by myself, without assistance.

So what I'm seeing here is a chicken-and-egg problem where Gallium has
no drivers because nobody wants to write drivers for it because its
interface is unproven because it has no drivers... Now that we actually
have real drivers for real hardware reaching production quality, I
think we can break this cycle and get people to start contributing to
Gallium, or at least bump down to the next level of reasons why they
won't write Gallium code. :3

Not that I'm saying excuses are bad or wrong, but in the end, r300g is
14.7klocs and r300c is 26.9klocs (and yes, I didn't count the shared
shader compiler code), so the goal of "Bring up drivers in less time,
with less code," appears to be achieved. We are almost reaching r300c
performance levels, and beating it handily in certain benchmarks, so
it is possible to write good new drivers on this codebase.

~ C.

-- 
When the facts change, I change my mind. What do you do, sir? ~ Keynes

Corbin Simpson



[Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Bridgman, John
>No, it was true even as the first Gallium code was landing in the
>repo.  Rewriting everything is always painful, and we already had
>plenty of other tasks to keep us busy (see Dave's mail) and cause pain
>for everyone.  In hindsight, maybe it wouldn't have been any worse than
>what we went through, but since the 3D driver is the biggest part of
>the stack, throwing away that part seemed like it would be the biggest
>amount of work.

>Dave's other points are also good ones; Gallium has yet to be proven
>with a big, open source, shipping, and supported driver.  I won't
>comment on the closed source stuff; I've heard things but haven't
>actually worked on it myself, so I have no idea whether there were good
>closed source drivers released or not.

We made essentially the same decision as you 18 months ago and implemented the 
initial r6xx/r7xx 3D driver on the "classic" HW driver model rather than 
Gallium3D. Even at the time it seemed highly likely that Gallium3D was going to 
work out well, but between the newness of Gallium3D itself and the work still 
to be done on KMS/DRI2/GEM/TTM there was just too much "new" for my liking. 

We did ask one of our devs (Cooper) to help with the Gallium3D effort and also 
look into video decode using G3D, but unfortunately he got pulled off onto 
another urgent non-driver project shortly afterwards so we didn't end up doing 
much to help at all. Fortunately the other developers pushed ahead without us 
(thanks guys ;)) and it's probably fair to say that our developer focus will 
probably jump across to r600 on Gallium3D fairly soon as well.  

For what it's worth, if we were writing a new "has to work, can't afford 
delays" driver from scratch today we would go with Gallium3D, period. Prior to 
that... I think we were confident it would all work but weren't quite sure how 
long it would take. 

The reality is that we don't have a conveniently timed architectural break to 
force the writing of an all new driver, and I imagine you don't either, so 
we're all going to have to "ooze" across to Gallium3D. The initial code to 
support Evergreen (HD5xxx) GPUs is being implemented on top of the "classic" 
r600 driver because so much of the programming model is common, but from that 
point on I think we would try to push the Gallium3D code ahead rather than 
doing more work on the classic code base.

I'll try to get a statement from our proprietary OpenGL driver team re: 
compatibility profiles -- or, more to the point, deprecating older GL 
functionality. I haven't looked into the issue much myself but my first 
impression was definitely "uh-oh, this is going to be a problem for a bunch of 
our users". 


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Bridgman, John
JB>>The reality is that we don't have a conveniently timed architectural break 
to force the writing of an all new driver, and I imagine you don't either, so 
we're all going to have to "ooze" across to Gallium3D.

OK, file this under "be careful what you wish for"...

It turns out that while the programming model of Evergreen is very similar to 
7xx, the register offsets are totally different, which has been causing a bunch 
of header file pain trying to merge Evergreen support into the existing r600 
driver. 
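
Purely as an illustration of the kind of restructuring the register shuffle 
forces - this is not actual r600/Evergreen code, and the register names and 
offsets below are made up - one option is to look registers up through a small 
per-family table at runtime instead of carrying a second set of #defines:

#include <stdint.h>

enum chip_family { FAMILY_R600, FAMILY_R700, FAMILY_EVERGREEN };

/* Placeholder register names and offsets, purely for illustration. */
struct family_regs {
   uint32_t cb_color0_base;
   uint32_t db_depth_base;
};

static const struct family_regs regs_by_family[] = {
   [FAMILY_R600]      = { 0x1000, 0x1100 },
   [FAMILY_R700]      = { 0x1000, 0x1100 },
   [FAMILY_EVERGREEN] = { 0x2000, 0x2200 },
};

/* Callers ask for a register by name and family instead of hard-coding a
 * family-specific offset macro. */
static inline uint32_t reg_cb_color0_base(enum chip_family family)
{
   return regs_by_family[family].cb_color0_base;
}

Whether that kind of indirection is worth the churn, versus just forking the 
code, is exactly the judgement call we're wrestling with.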

As a result of this discussion, we're thinking about changing plans a bit - 
making a copy of the r600 driver and hacking it up to be Evergreen only, then 
seeing if we can use that code to jump-start an Evergreen Gallium3D driver 
sooner rather than later. I guess we got our architectural break after all, 
just not the way I expected.

We'll need to figure out how this would co-exist with the work that Jerome and 
Corbin are doing - I'm thinking of it as a "quick and dirty" proof of concept 
driver that might live alongside the 600g code and eventually be replaced by it 
- whatever works. Part of the rationale here is that we have the same 
register-shuffling problem with the ddx driver so we might end up with a new 
copy of the accel code anyways - if so it might be a good time to play with 
using Gallium3D calls for EXA and Xv. 

We're obviously not very far into this and I wouldn't normally mention anything 
this soon if I hadn't just posted something *different* an hour ago ;)

Regarding the "Gallium vs Classic" interfaces for Mesa, it seems to me that the 
medium term plan should be to fork off a copy of Mesa for pre-Gallium3D drivers 
and limit it to GL2 or lower (the non-shaderful chips can't do GL2 anyways, can 
they ?) and then let Mesa evolve as a Gallium3D-only state tracker. This would 
*have* to be done on a schedule that allowed all of the existing "shader-based 
GPU on classic mesa" drivers (Intel, AMD, probably others) to comfortably 
migrate to Gallium3D-based drivers, which might not be fast, but at least there 
would be a plan. I wouldn't want to see support for our "classic" 3D drivers go 
away too quickly either.

I'm a bit out of touch on the GL3 support going in now, so it's not clear 
whether this is Gallium3D only or whether GL3 on the classic driver is 
practical. If GL3 is going to be Gallium3D only then I assume it's just a 
matter of time before all the active drivers move over, and the key is finding 
the right point to start cutting over ? I know our decision to go with 
"classic" Mesa drivers for 6xx/7xx was not easy and it's probably not going to 
be any easier for the Intel folks to make a decision to start moving to 
Gallium3D.

Anyways, does this all make sense ? I think things will work best if we all 
ooze across to Gallium3D at more or less the same time (say within 6 months at 
most) and I don't mind trying to push our plans ahead or holding them back a 
bit to make sure that we end up with a Gallium3D abstraction that works for 
everyone.

BTW I received an out-of-office bounce from our OpenGL architect, so probably 
won't hear back about deprecating older GL functionality until next Monday. 
I'll ask some other folks but I would rather get the definitive word from 
Pierre.

JB




Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Keith Whitwell
I'm much more relaxed about the future of Gallium these days.  I don't
think there's any sense in pushing people or projects towards it -
people are welcome to evaluate it on its merits and make their own
decisions on that basis.

The project itself is clearly on a strong footing.  We've shown we can
correct poor design choices without disrupting the entire stack,
and keep the interface a dynamic, evolving entity even with so much
built up around it.  And though there's clearly room for improvement
in how we (and I) deal with newcomers, I hope we've demonstrated we
can incorporate new ideas and new voices without letting go of the
fundamental idea of the project.

Though I'm sure it will provoke intense discussion, the next time
someone figures out we've made some fundamental miscalculation in the
design of gallium, and has the patience to convince us of it, I have
no doubt that it will be possible to resolve the issue and carry on
stronger.

As far as this thread goes, I think it's probably fine that there's
some chance to put voice to past anxieties & remember old
disagreements, but there's fundamentally too much good work to be done
in this space to spend a lot of time worrying about that stuff.

Keith

On Tue, Apr 13, 2010 at 8:16 PM, Bridgman, John  wrote:
>>No, it was true even as the first Gallium code was landing in the
>>repo.  Rewriting everything is always painful, and we already had
>>plenty of other tasks to keep us busy (see Dave's mail) and cause pain
>>for everyone.  In hindsight, maybe it wouldn't have been any worse than
>>what we went through, but since the 3D driver is the biggest part of
>>the stack, throwing away that part seemed like it would be the biggest
>>amount of work.
>
>>Dave's other points are also good ones; Gallium has yet to be proven
>>with a big, open source, shipping, and supported driver.  I won't
>>comment on the closed source stuff; I've heard things but haven't
>>actually worked on it myself, so I have no idea whether there were good
>>closed source drivers released or not.
>
> We made essentially the same decision as you 18 months ago and implemented 
> the initial r6xx/r7xx 3D driver on the "classic" HW driver model rather than 
> Gallium3D. Even at the time it seemed highly likely that Gallium3D was going 
> to work out well, but between the newness of Gallium3D itself and the work 
> still to be done on KMS/DRI2/GEM/TTM there was just too much "new" for my 
> liking.
>
> We did ask one of our devs (Cooper) to help with the Gallium3D effort and 
> also look into video decode using G3D, but unfortunately he got pulled off 
> onto another urgent non-driver project shortly afterwards so we didn't end up 
> doing much to help at all. Fortunately the other developers pushed ahead 
> without us (thanks guys ;)) and it's probably fair to say that our developer 
> focus will jump across to r600 on Gallium3D fairly soon as well.
>
> For what it's worth, if we were writing a new "has to work, can't afford 
> delays" driver from scratch today we would go with Gallium3D, period. Prior 
> to that... I think we were confident it would all work but weren't quite sure 
> how long it would take.
>
> The reality is that we don't have a conveniently timed architectural break to 
> force the writing of an all new driver, and I imagine you don't either, so 
> we're all going to have to "ooze" across to Gallium3D. The initial code to 
> support Evergreen (HD5xxx) GPUs is being implemented on top of the "classic" 
> r600 driver because so much of the programming model is common, but from that 
> point on I think we would try to push the Gallium3D code ahead rather than 
> doing more work on the classic code base.
>
> I'll try to get a statement from our proprietary OpenGL driver team re: 
> compatibility profiles -- or, more to the point, deprecating older GL 
> functionality. I haven't looked into the issue much myself but my first 
> impression was definitely "uh-oh, this is going to be a problem for a bunch 
> of our users".


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Keith Whitwell
On Tue, Apr 13, 2010 at 11:47 PM, Keith Whitwell
 wrote:
> I'm much more relaxed about the future of Gallium these days.  I don't
> think there's any sense in pushing people or projects towards it -
> people are welcome to evaluate it on its merits and make their own
> decisions on that basis.

Hmm, on Gmail this is threaded as if it were a comment on John's "be careful
what you wish for" post - which wasn't the intention. My own fault
for top-posting.

If there does emerge an idea of coordinated movement of drivers onto
gallium, that is definitely something I'd like to support...

Keith


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Bridgman, John
> On Tue, Apr 13, 2010 at 11:47 PM, Keith Whitwell 
>  wrote:
..
> Hmm, on gmail this is threaded as if a comment on John's "be careful
> what you wish for" post - which wasn't the intention.   My own fault
> for top-posting.

Probably my fault - I subscribed to the list midway through the discussion
and had no idea how to hook my message into the thread properly. Still
struggling with wrapping lines as well, as you can see ;)


Re: [Mesa-dev] Mesa/Gallium overall design

2010-04-13 Thread Bridgman, John
John Bridgman wrote:
> OK, file this under "be careful what you wish for"...
> 
> It turns out that while the programming model of Evergreen is 
> very similar to 7xx, the register offsets are totally 
> different, which has been causing a bunch of header file pain 
> trying to merge Evergreen support into the existing r600 driver. 
> 
> As a result of this discussion, we're thinking about changing 
> plans a bit - making a copy of the r600 driver and hacking it 
> up to be Evergreen only, then seeing if we can use that code 
> to jump-start an Evergreen Gallium3D driver sooner rather 
> than later. I guess we got our architectural break after all, 
> just not the way I expected.
> 
> We'll need to figure out how this would co-exist with the 
> work that Jerome and Corbin are doing - I'm thinking of it as 
> a "quick and dirty" proof of concept driver that might live 
> alongside the 600g code and eventually be replaced by it - 
> whatever works. Part of the rationale here is that we have 
> the same register-shuffling problem with the ddx driver so we 
> might end up with a new copy of the accel code anyways - if 
> so it might be a good time to play with using Gallium3D calls 
> for EXA and Xv. 
> 
> We're obviously not very far into this and I wouldn't 
> normally mention anything this soon if I hadn't just posted 
> something *different* an hour ago ;)
> 
> Regarding the "Gallium vs Classic" interfaces for Mesa, it 
> seems to me that the medium term plan should be to fork off a 
> copy of Mesa for pre-Gallium3D drivers and limit it to GL2 or 
> lower (the non-shaderful chips can't do GL2 anyways, can they 
> ?) and then let Mesa evolve as a Gallium3D-only state 
> tracker. This would *have* to be done on a schedule that 
> allowed all of the existing "shader-based GPU on classic 
> mesa" drivers (Intel, AMD, probably others) to comfortably 
> migrate to Gallium3D-based drivers, which might not be fast, 
> but at least there would be a plan. I wouldn't want to see 
> support for our "classic" 3D drivers go away too quickly either.
> 
> I'm a bit out of touch on the GL3 support going in now, so 
> it's not clear whether this is Gallium3D only or whether GL3 
> on the classic driver is practical. If GL3 is going to be 
> Gallium3D only then I assume it's just a matter of time 
> before all the active drivers move over, and the key is 
> finding the right point to start cutting over ? I know our 
> decision to go with "classic" Mesa drivers for 6xx/7xx was 
> not easy and it's probably not going to be any easier for the 
> Intel folks to make a decision to start moving to Gallium3D.
> 
> Anyways, does this all make sense ? I think things will work 
> best if we all ooze across to Gallium3D at more or less the 
> same time (say within 6 months at most) and I don't mind 
> trying to push our plans ahead or holding them back a bit to 
> make sure that we end up with a Gallium3D abstraction that 
> works for everyone.
> 
> BTW I received an out-of-office bounce from our OpenGL 
> architect, so probably won't hear back about deprecating 
> older GL functionality until next Monday. I'll ask some other 
> folks but I would rather get the definitive word from Pierre.
> 
> JB

Realized afterwards that my post about "oozing to Gallium3D
together" was kind of ambiguous - I meant "moving within 6
months of each other" not "moving within 6 months of today".

Also fixed up line spacing, sorry about that.


[Mesa-dev] [PATCH] u_blitter: add missing sampler_state->normalized_coords = 1 (how can this be?!?)

2010-04-13 Thread Luca Barbieri
The blitter uses normalized texcoords, but doesn't set normalized_coords in
the sampler state.

How can this possibly work with r300g though?
Am I missing something?

Perhaps r300g compensates with another bug that causes it to ignore the
request to use unnormalized texcoords?
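
For anyone who hasn't hit this before, here's a rough sketch of the distinction
(illustrative only, not actual driver code): with normalized_coords set, the
[0,1] texcoords coming out of the fragment shader are scaled by the texture
dimensions before sampling; with unnormalized coords they are treated as texel
positions directly, so a blit sampling with 0..1 coords through an unnormalized
sampler would only ever touch the first texel.

#include <stdbool.h>

/* Illustrative only: how a sampler conceptually maps shader texcoords to
 * texel positions depending on normalized_coords. */
static void texcoord_to_texel(float s, float t,
                              unsigned width, unsigned height,
                              bool normalized_coords,
                              float *u, float *v)
{
   if (normalized_coords) {
      *u = s * (float)width;   /* 0..1 spans the whole texture */
      *v = t * (float)height;
   } else {
      *u = s;                  /* coords are already in texels */
      *v = t;
   }
}
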
---
 src/gallium/auxiliary/util/u_blitter.c  |1 +
 src/gallium/drivers/nvfx/nv40_fragtex.c |2 +-
 2 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/src/gallium/auxiliary/util/u_blitter.c b/src/gallium/auxiliary/util/u_blitter.c
index 104cbf7..ba599e1 100644
--- a/src/gallium/auxiliary/util/u_blitter.c
+++ b/src/gallium/auxiliary/util/u_blitter.c
@@ -168,6 +168,7 @@ struct blitter_context *util_blitter_create(struct pipe_context *pipe)
sampler_state->wrap_s = PIPE_TEX_WRAP_CLAMP_TO_EDGE;
sampler_state->wrap_t = PIPE_TEX_WRAP_CLAMP_TO_EDGE;
sampler_state->wrap_r = PIPE_TEX_WRAP_CLAMP_TO_EDGE;
+   sampler_state->normalized_coords = 1;
/* The sampler state objects which sample from a specified mipmap level
 * are created on-demand. */
 
diff --git a/src/gallium/drivers/nvfx/nv40_fragtex.c b/src/gallium/drivers/nvfx/nv40_fragtex.c
index 289070e..69bc00b 100644
--- a/src/gallium/drivers/nvfx/nv40_fragtex.c
+++ b/src/gallium/drivers/nvfx/nv40_fragtex.c
@@ -125,7 +125,7 @@ nv40_fragtex_set(struct nvfx_context *nvfx, int unit)
 
txf  = ps->fmt;
txf |= tf->format | 0x8000;
-   txf |= ((pt->last_level + 1) << NV40TCL_TEX_FORMAT_MIPMAP_COUNT_SHIFT);
+   txf |= ((pt->last_level  + 1) << NV40TCL_TEX_FORMAT_MIPMAP_COUNT_SHIFT);
 
if (1) /* XXX */
txf |= NV34TCL_TX_FORMAT_NO_BORDER;
-- 
1.7.0.1.147.g6d84b



[Mesa-dev] glean pointSprite test

2010-04-13 Thread Dave Airlie
So I've finished the r300g point sprite code and this test fails;
fglrx fails in exactly the same way.

http://people.freedesktop.org/~airlied/piglit/fglrx/fglrxr500/test_glean__pointSprite.html

It appears the point size at which the texture sampler switches to the
next mipmap level is different from what swrast/softpipe use.
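
Rough back-of-the-envelope sketch of what I'd expect (assuming the usual GL
reasoning that the sprite texcoords run 0..1 across the point, so the per-pixel
texcoord derivative is about 1/point_size and the LOD comes out near
log2(texture_size / point_size)); the exact point size where the level flips
could still legitimately differ between implementations due to rounding or
LOD bias:

#include <math.h>
#include <stdio.h>

/* Approximate mipmap LOD for a point sprite: the texcoords span 0..1 over
 * point_size pixels, so rho ~= tex_size / point_size texels per pixel. */
static float point_sprite_lod(float tex_size, float point_size)
{
   return log2f(tex_size / point_size);
}

int main(void)
{
   for (float size = 1.0f; size <= 64.0f; size *= 2.0f)
      printf("point size %5.1f -> approx LOD %.2f (64x64 texture)\n",
             size, point_sprite_lod(64.0f, size));
   return 0;
}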

Any GL experts tell me if this is actually well specified or hw specific?

Dave.