Re: How to get two monitors working in Linux?

2011-12-28 Thread Aaron Plattner

On 12/27/2011 06:34 PM, monitorxxx wrote:

Hi, I have Arch Linux in this system:

-video card: nvidia 6150
-monitor 1: LG, 1440x900 (right side)
-monitor 2: Samsung, 1360x760 (left side)
-desktop environment: Lxde

But the problem is that:

1) no matter how hard I try (writing to xorg.conf, setting things up, and
rebooting), the monitors' positions always come up changed in NVIDIA X Server
Settings (the graphical tool).


Does the physical position of the screens swap too?  I.e., does the 
blank monitor change when you restart X?


There was a bug fixed in version 290.10 of the driver that would cause 
displays to get swapped when restarting X servers, but it should have 
only affected secondary GPUs.


The nvidia-settings control panel may not show the correct positions of 
the screens in separate X screen mode because the X server doesn't 
provide a way for X clients to know their relative positions.  My patch 
to fix it ballooned into a much larger change during code review and I 
never got around to finishing it.
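
For what it's worth, the on-disk positions for separate X screens live in the
ServerLayout section of xorg.conf. A minimal sketch (the Screen identifiers are
placeholders and have to match your own Screen sections):

  Section "ServerLayout"
      Identifier "TwoScreens"
      Screen 0 "Screen-Samsung" 0 0
      Screen 1 "Screen-LG" RightOf "Screen-Samsung"
  EndSection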



2) the screen now (Lxde) is shown in monitor 1 (ok), but monitor 2 is
completely black, and when I put the mouse cursor there, it changes to the
default Gnome mouse cursor (the 'X'), and when I right-click anywhere on that
screen (monitor 2), nothing happens.

I want both monitors running the same desktop environment (in this case,
Lxde), with "separate screens", so that when a program is executed on one
screen it does not open on the other. And of course, I want them configured in
the positions they occupy on the table (Samsung at the left, LG at the right).


I'm not familiar with LXDE, but desktop environments have recently been 
losing proper support for multiple X screens.  For example, KDE 3 worked 
great with multiple X screens but KDE 4 shows the 
black-screen-and-giant-X-cursor problem you're describing with LXDE.  I 
would suggest contacting LXDE's support community to ask for proper 
multiple X screen support.


Desktop environments aren't the only ones simply ignoring multiple X 
screens.  For example, Chrome has had a bug since at least 2009 that 
prevents you from opening new windows on the correct screens:


http://code.google.com/p/chromium/issues/detail?id=15781

I believe Firefox still has a similar problem, though I haven't tried it 
recently.



Do you know how to solve this problem?

What additional information or log files do you need?

Thanks!



Re: How to edit Drawable/XdbeBackBuffer data?...

2012-08-21 Thread Aaron Plattner

On 08/21/2012 03:56 PM, Lee Fallat wrote:

Alright, well I've decided to take another approach. I'm XGetImage()ing
the root window, darkening my pixels, and then storing the data in a
Drawable via XPutImage, but the Drawable isn't retaining the data:
it's just white when I XCopyArea() it over to what I want shown. Any
suggestions/reasons why this is happening?

Code sample:

static XImage *xi;

void XTintBG(Monitor *m, int x, int y, int w, int h) {
   unsigned long new_color;
   unsigned long r,g,b;

   XCopyArea(dpy, root, m->barwin, dc.gc, x, y, w, h, x, y);
   xi = XGetImage(dpy, m->barwin, x, y, w, h, AllPlanes, XYPixmap);


You almost certainly want ZPixmap rather than XYPixmap.  ZPixmap is your 
normal n-bits-per-pixel, pixels in scanline order image format. 
XYPixmap is the oh-god-kill-me-now image format.


The rest of the code seems reasonable, so try fixing that first.
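
In your snippet that is just the last argument of the XGetImage call:

  xi = XGetImage(dpy, m->barwin, x, y, w, h, AllPlanes, ZPixmap);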

If you need better performance and are willing to make your code even 
more complicated, you could bind the drawable in question using 
GLX_EXT_texture_from_pixmap, either as a pixmap directly, or by using 
the Composite extension to redirect the window you want and using its 
backing pixmap.  Then you can do much more complicated rendering using 
it as an OpenGL texture.  This unfortunately does not work on the root 
window but you could copy the root window's contents into a pixmap and 
then use that.
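
For the record, a rough sketch of that path in C (this is only an outline: it
assumes a GLX context is already current, 'pixmap' holds the copied contents,
and 'texture' is an existing GL texture; error handling omitted):

  #include <GL/glx.h>   /* pulls in the GLX_EXT_texture_from_pixmap tokens */

  /* Pick an FBConfig that can be bound as an RGBA texture. */
  const int fbc_attribs[] = {
      GLX_BIND_TO_TEXTURE_RGBA_EXT, True,
      GLX_DRAWABLE_TYPE, GLX_PIXMAP_BIT,
      GLX_BIND_TO_TEXTURE_TARGETS_EXT, GLX_TEXTURE_2D_BIT_EXT,
      GLX_DOUBLEBUFFER, False,
      None
  };
  int nconfigs = 0;
  GLXFBConfig *fbc = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbc_attribs, &nconfigs);

  /* Wrap the X pixmap in a GLXPixmap usable as a GL_TEXTURE_2D. */
  const int px_attribs[] = {
      GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
      GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
      None
  };
  GLXPixmap glxpixmap = glXCreatePixmap(dpy, fbc[0], pixmap, px_attribs);

  /* The EXT entry points are looked up at runtime. */
  PFNGLXBINDTEXIMAGEEXTPROC bindTexImage = (PFNGLXBINDTEXIMAGEEXTPROC)
      glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");
  PFNGLXRELEASETEXIMAGEEXTPROC releaseTexImage = (PFNGLXRELEASETEXIMAGEEXTPROC)
      glXGetProcAddress((const GLubyte *)"glXReleaseTexImageEXT");

  glBindTexture(GL_TEXTURE_2D, texture);
  bindTexImage(dpy, glxpixmap, GLX_FRONT_EXT, NULL);
  /* ... render using the texture ... */
  releaseTexImage(dpy, glxpixmap, GLX_FRONT_EXT);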


Adam's suggestion of using Render would be simpler if you can find a 
blending op that does what you want.


-- Aaron


   for(int pix_y = 0; pix_y < h; pix_y++) {
     for(int pix_x = 0; pix_x < w; pix_x++) {
       new_color = XGetPixel(xi, pix_x, pix_y);
       r = g = b = 0;
       r = (new_color & 0x00FF0000) >> 16;
       g = (new_color & 0x0000FF00) >> 8;
       b = (new_color & 0x000000FF) >> 0;
       r -= 32;
       g -= 32;
       b -= 32;
       if(r > 255) r = 255;
       if(r < 0) r = 0;
       if(g > 255) g = 255;
       if(g < 0) g = 0;
       if(b > 255) b = 255;
       if(b < 0) b = 0;
       new_color = r << 16;
       new_color = new_color | (g << 8);
       new_color = new_color | (b << 0);
       XPutPixel(xi, pix_x, pix_y, new_color);
     }
   }

   XPutImage(dpy, m->barwin, dc.gc, xi, 0, 0, 0, 0, m->ww, bh);
   XCopyArea(dpy, m->barwin, dc.bg, dc.gc, 0, 0, m->ww, bh, 0, 0);
}

int ftime = 1;

void
drawbar(Monitor *m) {
   int x;
   unsigned int i, occ = 0, urg = 0;
   XftColor *col;
   Client *c;

   if(ftime == 1) {
   ftime = 0;
   XTintBG(m, 0, 0, m->ww, bh);
   }

   XCopyArea(dpy, dc.bg, dc.drawable, dc.gc, 0, 0, m->ww, bh, 0, 0);



}

On Mon, Aug 20, 2012 at 3:17 PM, Adam Jackson <a...@redhat.com> wrote:

On 8/20/12 2:08 PM, Lee Fallat wrote:

Hey,

I'm trying to darken/lighten the image data in a
Drawable/XdbeBackBuffer.
Any ideas on how to get access to the data?...I've done an
XGetImage() on
Drawable but that really slows down the application I'm editing...


That is, in fact, how you do it.  GetImage is not fast.  ShmGetImage
is faster, and as fast as you're going to get, if you insist on
doing this by pulling all the pixels down to the client and then
pushing them back up.

You may instead wish to use the Render extension, which gives you
the usual set of Porter-Duff blend operations, and which runs in the
server so you're not copying all that data around.

- ajax






Re: xorg 7.6, nvidia multiseat completely broken?

2012-11-11 Thread Aaron Plattner

On 11/11/12 06:34, Ditmar Unger wrote:

Hello,

things are getting worse than ever. Until OpenSuSE 12.1 I had a working
dualseat configuration with separate mice and keyboards for years. Now,
with SuSE 12.2, xorg 7.6, two nvidia cards (610GT and 430GT), and the
newest driver nvidia-computeG02-304.64-22.1.x86_64, I cannot even start
two X servers any more, no matter whether I use one xorg.conf as shown below
or two xorg.conf.seat[1,2] files as recommended in
http://wiki.gentoo.org/wiki/Multiseat, because when the display cable is
connected to the second seat the X server now just crashes and the
computer needs a reboot.

The error messages in /var/log/Xorg.0.log say:

NVIDIA(GPU-1): Failed to allocate EVO core DMA push buffer

Caught signal 11 (Segmentation fault). Server aborting

and if you google this, you find in
http://www.nvnews.net/vbulletin/showthread.php?t=185042

nvidia saying "Multiseat configurations are not supported, I'm afraid.
Sorry."


This is correct, it's not supported, but reports like this one do help 
us prioritize feature requests.



This is incredible. Who the hell does such stupid things, and why?

So after frustrating hours of playing with different graphics cards and
configurations, all I know is that I should never have updated the
working configuration; the rest is nothing but bullshit.

Now, for the very last chance before giving up with multiseat on Linux -
are there any hints of wise old Scandinavian men left?


I doubt it will help, and I don't want to imply that we're going to 
support these configurations, but does setting


  Option "ProbeAllGpus" "off"

in both xorg.conf files help?
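
For reference, a sketch of where that would go; the identifier and BusID below
are placeholders for whatever your Device sections already contain, and I'm
assuming the option is accepted in the Device section:

  Section "Device"
      Identifier "nvidia-seat1"
      Driver     "nvidia"
      BusID      "PCI:1:0:0"
      Option     "ProbeAllGpus" "off"
  EndSection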

-- Aaron


Re: Is it possible to force a monitor to stay in the server layout?

2012-12-21 Thread Aaron Plattner

On 12/21/12 06:29, Sven Berkvens-Matthijsse wrote:

Hello,

I can't seem to find an answer to my question anywhere on the web.
Either my use case is weird, or I'm not searching correctly :-)

I have X.org server 1.11.3 running on Ubuntu. I'm using the radeon
driver with a Radeon HD6450. I have three monitors connected to the card
(all three monitors have a DVI input), and I've set up the server to
have all three monitors in one large desktop. This works brilliantly!

Sometimes, I need to connect one of the monitors to another PC or other
equipment with an HDMI or DVI output. My problem is that if I pull out
one of the cables connected to the HD6450, the card (and the X server)
will detect that the monitor has been disconnected from the card.
Consequently, the desktop size is modified and all my Gnome panels get
jumbled. If I reconnect the monitor, it comes back online, but the
positioning of the windows and panels is not restored to what it used
to be.


It's the X clients' job to effect that sort of policy.  In your case, 
it's probably gnome-settings-daemon.  You could try killing it as an 
experiment to see if the behavior goes away, or logging into a simpler 
desktop environment like Fluxbox that doesn't have a monitor policy daemon.


I don't know offhand if gnome-settings-daemon has an option to turn off 
automatic reconfiguration of the screens.



What I would like is to be able to tell the X server that I don't care
whether monitors are disconnected or not, and that it should keep the
desktop size fixed, no matter which monitors are connected. My X
configuration file includes a full server layout section, and also
contains the resolutions of the monitors. Therefore, it should be
possible, I guess; I just don't know how. Perhaps the current software
is not able to handle this specific situation. I can live with it if the
monitors need to be connected at the time that the X server starts. As
long as I can disconnect them later on without the server layout and
desktop size changing.


The nvidia driver has a "UseHotplugEvents" option you can use to 
suppress the RandR events that clients like gnome-settings-daemon listen 
for, to work around this sort of problem.  Maybe the radeon driver has 
something similar?
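
For the nvidia side, a sketch of what I mean (the identifier is a placeholder,
and you should check the README for where the option is accepted before
relying on this):

  Section "Device"
      Identifier "nvidia-gpu"
      Driver     "nvidia"
      Option     "UseHotplugEvents" "false"
  EndSection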


--
Aaron


I realize that most people want the behavior that the X server exhibits
currently, because in their case it will probably be an external monitor
that is connected to a laptop on occasion. So I probably want something
odd :-)

Any help or pointers will be greatly appreciated. If the software is
currently not capable of what I've attempted to describe, I'd also like
to know, of course.





[ANNOUNCE] xrandr 1.4.0

2013-02-12 Thread Aaron Plattner

xrandr provides a primitive command line interface to the X11 Resize, Rotate,
and Reflect (RandR) extension.

New features:
 * Support for RandR 1.4's provider objects.  See the --listproviders,
   --setprovideroutputsource, and --setprovideroffloadsink options for more
   information.
 * --set now allows a comma-separated list of values.  This is useful with the
   Border property to configure different border adjustments for different edges
   of the screen.
 * --scale-from, which specifies a scaling transform based on the desired
   desktop size.
 * --query now indicates which output, if any, is primary.

It also contains a number of bug fixes.

One notable behavior change is that the --gamma option now takes the actual
gamma value rather than the reciprocal of the gamma value.  This matches the
behavior of other programs such as xgamma and the gamma configuration options in
xorg.conf.
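
As a concrete example (the output name here is hypothetical), to get a display
gamma of 2.2 you now run

  xrandr --output DVI-I-0 --gamma 2.2:2.2:2.2

whereas previous versions expected roughly the reciprocal (about
0.45:0.45:0.45) to achieve the same result.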

Aaron Plattner (12):
  Add a --scale-from option
  xrandr: Fix string constness bugs
  man: document provider options
  xrandr: make providers a first-class citizen
  xrandr: look for providers by name or xid
  xrandr: Fix variable declaration warnings
  Bug #11397: check that numeric --orientation arguments are in range
  Bug #14118: print usage() to stdout, proper errors for bad arguments
  Bug #29603: document that there might be multiple preferred modes
  Bug #37043: adjust refresh rates for doublescan and interlace
  Cast XID to unsigned int to suppress a printf warning
  xrandr 1.4.0

Adam Jackson (2):
  Document the rarer --newmode flags in --help output
  configure: Drop AM_MAINTAINER_MODE

Alan Coopersmith (3):
  config: Add missing AC_CONFIG_SRCDIR
  Mark fatal() and warning() as taking printf-style arguments
  Fix -Wformat warnings about passing longs where ints were expected

Andy Ritger (5):
  xrandr: use 1/gamma to compute gamma-correction
  xrandr: fix gamma == 1.0 && sigbits != 8
  xrandr: compute gamma-correction in [0,2^sigbits)
  xrandr: extend '--set' syntax to allow a comma-separated list of values
  xrandr: generalize output property printing

Colin Walters (1):
  autogen.sh: Honor NOCONFIGURE=1

Dave Airlie (1):
  xrandr: add provider interfaces

Eric S. Raymond (1):
  Running text interspersed with options prevents DocBook translation; remove.

Jeremy Huddleston (1):
  Include strings.h for strcasecmp

Keith Packard (3):
  xrandr: Preserve current mode when switching crtcs
  Update keystone program to run with new nichrome bits
  keystone.5c: cairo-5 box semantics changed default layout

Pierre-Loup A. Griffais (2):
  xrandr: move transform limit checking after scaling
  xrandr: print primary output

git tag: xrandr-1.4.0

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.0.tar.bz2
MD5:  4d68317238bb14a33c0e419233d57d87  xrandr-1.4.0.tar.bz2
SHA1: 01bdbe3905e19fad93fe9fcb6185f16d22ad33b2  xrandr-1.4.0.tar.bz2
SHA256: a76b004abe6fd7606eba9ad161ac6391fe5c665708cc5fb7c7ea7d36459d9693  xrandr-1.4.0.tar.bz2

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.0.tar.gz
MD5:  99624bb743e96721307e2fa91f649dd6  xrandr-1.4.0.tar.gz
SHA1: 492255ed2af5597f280eac697f9f46caf650c5f0  xrandr-1.4.0.tar.gz
SHA256: c29a1030bc693ce1c618de35dfa8eb88989b9de477822e9c7ddb5053e382ace8  xrandr-1.4.0.tar.gz



Re: X display locking

2013-03-07 Thread Aaron Plattner

On 03/07/2013 06:31 AM, Torsten Jager wrote:

Hello!

What is the proper usage of XLockDisplay () / XUnlockDisplay ()
when an application has multiple threads using

   * "normal" Xlib functions
   * Xitk functions
   * libGL and/or
   * libvdpau ?


XLockDisplay / XUnlockDisplay is only required when you need multiple 
requests to be atomic with respect to requests being sent by other 
threads.  For example, if you have a function like


XGrabServer()
XGetImage()
XUngrabServer()

then you'll probably want to bracket the whole thing with XLockDisplay / 
XUnlockDisplay if you have another thread that could otherwise perform 
rendering during the grab or destroy the window you're trying to 
GetImage or something.
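
A minimal sketch of that bracketing (dpy, win, width, and height are whatever
your code already has):

  XLockDisplay(dpy);   /* keep other threads' requests out of this sequence */
  XGrabServer(dpy);
  XImage *img = XGetImage(dpy, win, 0, 0, width, height, AllPlanes, ZPixmap);
  XUngrabServer(dpy);
  XFlush(dpy);         /* make sure the ungrab goes out promptly */
  XUnlockDisplay(dpy);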



On my machine (libX11 1.4.0) bracketing all 4 seems to be necessary
to avoid lockups and stack corruption. At least doing so does
work here.


You shouldn't get lockups unless you take the lock in one thread and 
don't release it.  You did call XInitThreads() as the very first thing, 
right?



However, on my mate's box the same code traps both libGL and
libvdpau into an infinite sched_yield() polling loop.


Sounds like a bug.


What am I doing wrong?


My guess would be calling XInitThreads too late.  You have to call it 
before anything else, including libraries like libGL and libvdpau, make 
any Xlib calls.



Torsten


--
Aaron


Re: X display locking

2013-03-12 Thread Aaron Plattner

On 03/12/2013 11:52 AM, Torsten Jager wrote:

Applications should never call those functions - they are Xlib's
internal locking mechanism for the request buffers.



Applications should only call XInitThreads() to set up the locks
before any Xlib calls are made.


I think Allen must have been thinking of the Xlib internal LockDisplay 
and UnlockDisplay functions.  However, those functions aren't necessary 
unless you need multiple requests from one thread to be atomic w.r.t. 
other threads using the same display connection.  For your use case, it 
sounds like you don't need them.



Thank you for answering.

You are confusing me. My manpage says

   "It is only necessary to call this function if multiple threads
might use Xlib concurrently."

In other words, it _is_ necessary in my case.


You shouldn't get lockups unless you take the lock in one thread and
don't release it.  You did call XInitThreads() as the very first thing,
right?


Yes.


However, on my mate's box the same code traps both libGL and
libvdpau into an infinite sched_yield() polling loop.



Sounds like a bug.


Well, my mate just reported the issue gone with the new nvidia drivers
304.84 and 313.26. Funny though, I did not have those problems with
310.32 here.

Anyway, I #define'd out all calls except for the XInitThreads()
right before the initial XOpenDisplay(). I verified this by looking
at the symbol import tables of the resulting binaries.
Then everything was fine - as long as I kept moving the mouse
pointer over the application's output window. When I stopped,
something like this happened:

Thread 17 (Thread 0xb639eb70 (LWP 12037)):
#0  0xe430 in __kernel_vsyscall ()
#1  0xb73cb75e in poll () from /lib/libc.so.6
#2  0xb7273470 in _xcb_conn_wait (c=0x8114bb8, cond=0xb639e190, vector=0x0,
 count=0x0) at xcb_conn.c:313
#3  0xb7274db7 in xcb_wait_for_reply (c=0x8114bb8, request=4291, e=0xb639e22c)
 at xcb_in.c:378
#4  0xb7565d58 in _XReply (dpy=0x8125538, rep=0xb639e290, extra=0, discard=0)
 at xcb_io.c:533
#5  0xb754af76 in XGetWindowProperty (dpy=0x8125538, w=31458047, property=287,
 offset=0, length=2147483647, delete=0, req_type=287, 
actual_type=0xb639e30c,
 actual_format=0xb639e310, nitems=0xb639e318, bytesafter=0xb639e314,
 prop=0xb639e31c) at GetProp.c:61
#6  0x080c112d in xitk_is_window_iconified (display=0x8125538, window=31458047)
 at window.c:68
#7  0x08083df5 in slider_loop (dummy=0x0) at panel.c:444
#8  0xb747db25 in start_thread () from /lib/libpthread.so.0
#9  0xb73d646e in clone () from /lib/libc.so.6

Thread 15 (Thread 0xb2a1db70 (LWP 12039)):
#0  0xe430 in __kernel_vsyscall ()
#1  0xb7482125 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
#2  0xb7552e74 in _XConditionWait (cv=0x8125fc8, mutex=0x81274c8) at 
locking.c:353
#3  0xb7565d15 in _XReply (dpy=0x8125538, rep=0xb2a1d0bc, extra=0, discard=0)
 at xcb_io.c:527
#4  0xb52ff220 in ?? () from /usr/lib/libGL.so.1
#5  0xb6bd1ff4 in ?? ()
from /usr/local/lib/xine/plugins/2.2/xineplug_vo_out_opengl2.so
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

OK, I am a xine developer.

Using the unchanged Kaffeine frontend (which works with the locking
around libGL) is even worse off - heavy input activity is needed to break
the freeze.

The stack corruption message is probably just a result of gdb losing track
inside the proprietary libGL.

But even with no locking from my side, we got stuck here.

I think of 2 possible causes:

   * the "applications should not use" policy had been added later than my
 libX11 version, or

   * KDE 4.4 does interfere somehow. I guess it has to, at least to
 intercept Alt-Tab and similar stuff even if the app itself does not
 even link against kde or Qt libs.

Any ideas?


What version of XCB are you using?  There were a significant number of 
thread-related problems introduced when libX11 first switched to using 
XCB as a backend.  I'd suggest using a libX11 built without XCB, but 
doing that has gotten a lot harder on recent distributions.


This deadlock kind of sounds like this one:
http://cgit.freedesktop.org/xcb/libxcb/commit/?id=23911a707b8845bff52cd7853fc5d59fb0823cef

--
Aaron


Re: xorg.conf: setting up scaled desktop ?

2013-03-25 Thread Aaron Plattner

On 03/25/2013 07:40 AM, Toerless Eckert wrote:

I've got this old projector that has 1366x768 native resolution,
but will not correctly display VGA/computer signal at native resolution,
and 1280p resolution has stupid overscan.

The way I solved this under Windows with NVIDIA's drivers is
to output a 1080p video signal, which the projector will accept, and
then have a desktop resolution of 720p and configure underscan such
that it is displayed quite accurately 1:1 on the projector.
This trick doesn't work natively with the NVIDIA drivers here; instead
I can only achieve it by display cloning, where VGA is the primary
display set to 720p resolution, and the video signal is cloned to
the YUV connector set to 1080p resolution.

So, I was wondering if it is possible to configure something like this
in Xorg, even if it is only possible by cloning and appropriately
setting underscan.

Any example configs with nvidia drivers for cloning and underscan?


Hi Toerless,

Please direct questions about the NVIDIA graphics drivers to 
linux-b...@nvidia.com.


You should be able to achieve the sort of configuration you're looking 
for by using a combination of the ViewPortIn and ViewPortOut attributes 
described in the README:


http://http.download.nvidia.com/XFree86/Linux-x86/313.26/README/configtwinview.html#MetaModes
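
As a rough sketch of the kind of MetaMode that section describes (the display
name and the exact ViewPortOut geometry are placeholders you would tune to
your projector's overscan):

  Option "MetaModes" "DVI-I-0: 1920x1080 { ViewPortIn=1280x720, ViewPortOut=1824x1026+48+27 }"

That sends a 1080p signal to the display while presenting a 720p desktop, with
the output viewport shrunk and offset to compensate for the overscan.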

--
Aaron


[ANNOUNCE] xrandr 1.4.2

2014-03-28 Thread Aaron Plattner
xrandr is a command line interface to the X11 Resize, Rotate, and
Reflect (RandR) extension.

This minor bugfix release restores the ability to disconnect
providers from each other using "0x0" as the provider XID.  For
example, to disconnect a display offload sink from its source,
use

  xrandr --setprovideroutputsource  0x0

This release also formats the GUID provided by DisplayPort 1.2
displays in traditional GUID form.

Finally, this release increases the precision of refresh rate
calculations to disambiguate modes with very similar refresh
rates and to improve the accuracy for interlaced modes.

Aaron Plattner (5):
  Split output property printing into a helper function
  Move EDID printing into a helper function
  Special-case printing of the GUID property
  xrandr: document how to disconnect RandR 1.4 providers
  xrandr 1.4.2

Dave Airlie (1):
  xrandr: allow disconnecting of offload and outputs

Ville Syrjälä (2):
  xrandr: Use more decimal places when printing various rates
  xrandr: Use floating point for VTotal when calculating refresh rate

git tag: xrandr-1.4.2

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.2.tar.bz2
MD5:  78fd973d9b532106f8777a3449176148
SHA1: e9f67b83ecac5a2485fdf89b13f4a076961139e1
SHA256: b2e76ee92ff827f1c52ded7c666fe6f2704ca81cdeef882397da4e3e8ab490bc

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.2.tar.gz
MD5:  a4a9457a3a5fef7f17a17f31148fea21
SHA1: 056347ac9c0ed084ae9a7a68ebd0bb60dbe3cad7
SHA256: ea21efda9d9b8db416ffdc9b46995f2d41500a37a5419591290385aba98d0a73

[ANNOUNCE] xrandr 1.4.3

2014-08-01 Thread Aaron Plattner

xrandr is a command line interface to the X11 Resize, Rotate, and Reflect
(RandR) extension.

This minor bugfix release fixes gamma ramp calculations on GPUs with unusual
gamma table configurations, removes redundant "Setting reflection" messages when
--verbose is specified, adds the ability for the -x and -y switches to be used
to undo previous reflections, and adds the missing --brightness option to the
--help usage summary.

Aaron Plattner (1):
  xrandr 1.4.3

Connor Behan (2):
  Remove duplicate printing of the axis
  Allow -x and -y switches to undo themselves

Dominik Behr (1):
  xrandr: use full range for gamma table generation

Stéphane Aulery (1):
  Mention of --brightness with -h option

Thomas Klausner (1):
  Remove unnecessary parentheses.

git tag: xrandr-1.4.3

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.3.tar.bz2
MD5:  441fdb98d2abc6051108b7075d948fc7  xrandr-1.4.3.tar.bz2
SHA1: 30fde46b9ed9f3fa8a9c05837723306299f62a37  xrandr-1.4.3.tar.bz2
SHA256: 7154ac3486b86923692f2d6cdb2991a2ee72bc32af2c4379a6f1c068f204be1b  xrandr-1.4.3.tar.bz2
PGP:  http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.3.tar.bz2.sig

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.3.tar.gz
MD5:  17fa4a70aa90e76a89bbce5ba1cdee25  xrandr-1.4.3.tar.gz
SHA1: 39be6b2e82146364f65db8fb1c448d97da2afea8  xrandr-1.4.3.tar.gz
SHA256: 902f62acd64da03a54127fde58f42ea79b55d8e6a7c30539645e6460cbcb865d  xrandr-1.4.3.tar.gz
PGP:  http://xorg.freedesktop.org/archive/individual/app/xrandr-1.4.3.tar.gz.sig


Re: Multihead (3 Monitors) Sabrent USB->HDMI problems

2014-10-06 Thread Aaron Plattner

On 10/06/2014 04:44 PM, daidoji70 wrote:

Hello xorg,
I wasn't sure who to mail, so I thought I'd mail this mailing list and
hopefully I can be pointed in the right direction.  At work I have been
given 2 monitors to go along with my EliteBook 8540w (laptop).  I have
gotten one of the external monitors to work with the DisplayLink
connection, but learned last week that as my laptop's graphics card
"01:00.0 VGA compatible controller: NVIDIA Corporation GT216GLM [Quadro
FX 880M] (rev a2)" only has two CRTCs, I'd be restricted to only 2
monitors (which seems a waste as I have 3).

So I bought a Sabrent USB-HRHD USB->HDMI adapter.  Today, with the help
of a very nice person on #xorg, after some unplugging/replugging and
unloading udlfb and forcing udl to load, I got the monitor to show up and
could move windows to the display, but my mouse would not move there (it
would just stop on the middle monitor).

However, while trying to troubleshoot that issue, I rebooted the machine
and since then I have not been able to pick that provider up again
(although the udl module seems to load just fine).

Logs and information that seems to be relevant when talking to people on
xorg below.  Please let me know if you need any other information or can
point me in the right direction.

Thanks in advance.


Are you plugging in the USB device before, or after starting the X 
server?  I haven't debugged why it happens, but I've had trouble getting 
the X server to recognize the device unless it's hotplugged while the 
server is already running.
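
One other thing worth checking after you plug it in: see whether the server
picked up a second provider at all, e.g. with

  xrandr --listproviders

If the udl provider isn't listed, the server never saw the device, and
replugging it with the server already running is worth trying first.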


--
Aaron


Re: using uinput connect X to proprietary (TCP based) keyboard endpoint

2014-10-23 Thread Aaron Plattner

On 10/16/2014 08:12 AM, Peter Hutterer wrote:

On Thu, Oct 16, 2014 at 04:48:37AM +0200, arne.ad...@t-online.de wrote:

Hi,
I am trying to integrate a proprietary keyboard that sends Linux scancodes via
TCP.
My idea is to use uinput to forward the received keycodes to locally running
applications (including the X server).
In my xorg.conf I have the following section:

Section "InputDevice"
 # to enable user defined virtual keyboard
 Identifier "Keyboard1"
 Option "Device" "/dev/input/event14"
 Driver "evdev"
EndSection
where event14 is the event queue associated with the uinput-simulated "device".
I do see the scancodes sent from my device with both commands:
- xinput test-xi2 --root
-  showkey -s
However I am not able to intercept the keyboard events in this simple X 
application

int main(int argc, char** argv)
{
   Display* display = XOpenDisplay(NULL);
   Window window = XCreateSimpleWindow(display, RootWindow(display, 0), 1, 1, 500, 500,
                                       0, BlackPixel(display, 0), BlackPixel(display, 0));
   XSelectInput(display, window, KeyPressMask | KeyReleaseMask);
   XMapWindow(display, window);



add a XFlush() here, that should do the trick.


XNextEvent implicitly flushes.


Cheers,
Peter


   XEvent report;
   while (1)
   {
      XNextEvent(display, &report);
      switch (report.type)
      {
      case KeyRelease:
         printf("got a KeyRelease event: %d, %d\n", report.xkey.keycode, report.xkey.state);
         break;
      case KeyPress:
         printf("got a KeyPress event: %d, %d\n", report.xkey.keycode, report.xkey.state);
         break;
      default:
         printf("got a %d event\n", report.type);
         break;
      }
   }
   XFlush(display);
   sleep(5);
   return (EXIT_SUCCESS);
}


--
Aaron


Re: how can I identify which video card on a multi-card linux box a display belongs to

2014-11-21 Thread Aaron Plattner

On 11/10/2014 12:56 AM, Chris Wilson wrote:

On Mon, Nov 10, 2014 at 04:31:59PM +0800, Zhang Fan wrote:

Hi all,
I'm developing a video matrix system for Linux.  Typically, multiple
vendors' video cards and/or multiple cards of the same vendor/model might
be used in one system.
I can list the displays using the 'xrandr' program and read the
video card information from the /sys/class/drm/ directory, but I can't
figure out how to relate these two kinds of information, i.e., which
card/port a display belongs to.
'xrandr --verbose' lists the 'Identifier' of each display, but it
seems volatile, might change after a reboot, and has nothing
to do with the information in /sys/class/drm.


If your devices support DRI2/DRI3, you can send a DRI request to query
the device node for a screen. Might be a nice extension to xrandr
--verbose, or perhaps a new dri[23]info.


Seems like that would make a pretty good standard property for RandR 
Provider objects.


In general, there's no 1-1 mapping from an X screen to a GPU.  E.g., 
with the NVIDIA driver with SLI enabled, multiple GPUs are used to 
render one screen.  With various forms of PRIME, multiple GPUs can be 
used to handle various aspects of rendering or scanout on a single X screen.



-Chris


--
Aaron

Re: combining Render's Composite + SHM's PutImage

2015-04-16 Thread Aaron Plattner

On 04/13/2015 02:33 AM, Nigel Tao wrote:

On Fri, Apr 10, 2015 at 10:23 PM, Nigel Tao  wrote:

Even where SHM CreatePixmap works, I can only seem to create a
depth-24 pixmap, which defeats the purpose of alpha-blending if the
shared-memory image's alpha channel is implicitly fully opaque. If I
try to create a depth-32 pixmap, I get a Bad Match (8) error. I
noticed that the Screen's RootVisual (0x20, see xdpyinfo snippet
below) that I passed to CreateWindow corresponded to a depth-24
VisualInfo, so I tried passing different VisualInfos to CreateWindow
(either 0x21 or 0x60), but got another Bad Match (8) error from
CreateWindow.


SHM pixmaps are only allowed if the driver enables them.  It's the 
application's job to check before trying to create one.  In NVIDIA's 
case, we disabled them because they can't be accelerated by the GPU and 
are generally terrible for performance.


You can query it with "xdpyinfo -ext MIT-SHM"

I'm not sure why you're using shared memory to begin with.  Especially 
if you're just doing alpha blending, you're almost certainly much better 
off using OpenGL or the X11 RENDER extension to let the GPU do the 
graphics work.


At least for NVIDIA, you're going to need to copy the pixels into video 
RAM at some point anyway.  If you can upload the pixels to the GPU once 
and then leave them there, that's your best bet.



Ah, creating depth-32 pixmap and pictures works... once I set a
colormap, which makes sense.

Also, I seem to need a border pixel (or border pixmap), which is less
obvious to me.
http://stackoverflow.com/questions/3645632/how-to-create-a-window-with-a-bit-depth-of-32

One last thing: by using a depth-32 visual and colormap, I no longer
get expose events when my window manager moves my window off-screen.
Instead, the previously painted pixels are restored. I guess this
makes sense, since the window's pixels (of depth 32) are no longer
shared with the screen's pixels (of depth 24). However, I'm worried
about having many such windows, all taking up memory even if I
minimize or iconify them. Is there a way for my program (the X client)
to tell the X server to drop the backing pixmap when idle / minimized?


The X server will use the Composite extension automatically to redirect 
the contents of your window into a backing pixmap when its depth doesn't 
match the depth of its parent.  There's no way around this because the 
windows have different pixel formats.


Generally, you only want to use the 32-bit visual if you expect the 
alpha channel of your window to be used by a composite manager to blend 
your window with whatever's below it.  If you're just doing alpha 
blending yourself in order to produce opaque pixels to present in a 
window, you should use a 24-bit visual and do your rendering using 
OpenGL or an offscreen 32-bit pixmap.


--
Aaron

Re: combining Render's Composite + SHM's PutImage

2015-04-20 Thread Aaron Plattner

On 04/16/2015 06:56 PM, Nigel Tao wrote:

On Fri, Apr 17, 2015 at 5:53 AM, Aaron Plattner  wrote:

SHM pixmaps are only allowed if the driver enables them.  It's the
application's job to check before trying to create one.  In NVIDIA's case,
we disabled them because they can't be accelerated by the GPU and are
generally terrible for performance.

You can query it with "xdpyinfo -ext MIT-SHM"


Ah, SHM QueryVersion will do this programmatically. Thanks.



I'm not sure why you're using shared memory to begin with.  Especially if
you're just doing alpha blending, you're almost certainly much better off
using OpenGL or the X11 RENDER extension to let the GPU do the graphics
work.


Yes, I want to use Render. I also want to avoid copying millions of
pixels between X client and X server processes via the kernel, so I
want to use SHM too.



At least for NVIDIA, you're going to need to copy the pixels into video RAM
at some point anyway.  If you can upload the pixels to the GPU once and then
leave them there, that's your best bet.


Ah, so what ended up working for me is to create a new (regular,
non-SHM) Pixmap, call SHM PutImage to copy the pixels to the Pixmap,
then use Render with that Pixmap as source.


Yes, that sounds like the right approach to me.
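
In Xlib terms (just for brevity; the XCB calls are analogous), that sequence
looks roughly like the sketch below. It assumes the SHM segment and 'shimg'
were already set up with XShmCreateImage/XShmAttach, that 'win' is a 24-bit
window, that 'gc32' is a GC created against a depth-32 drawable, and that
error handling is omitted:

  #include <X11/extensions/XShm.h>
  #include <X11/extensions/Xrender.h>

  /* 1. Copy the shared-memory image into a regular server-side pixmap. */
  Pixmap src = XCreatePixmap(dpy, win, width, height, 32);
  XShmPutImage(dpy, src, gc32, shimg, 0, 0, 0, 0, width, height, False);

  /* 2. Wrap both drawables in Render pictures (ARGB32 source over an
   *    RGB24 window, as assumed above). */
  XRenderPictFormat *argb32 = XRenderFindStandardFormat(dpy, PictStandardARGB32);
  XRenderPictFormat *rgb24  = XRenderFindStandardFormat(dpy, PictStandardRGB24);
  Picture srcpic = XRenderCreatePicture(dpy, src, argb32, 0, NULL);
  Picture dstpic = XRenderCreatePicture(dpy, win, rgb24, 0, NULL);

  /* 3. Let the server (and GPU) do the alpha blend. */
  XRenderComposite(dpy, PictOpOver, srcpic, None, dstpic,
                   0, 0, 0, 0, 0, 0, width, height);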


For NVIDIA, does a (server-side) Pixmap always mean video RAM and not
general purpose RAM? Either way, it works for me, but it'd help my
mental model of what actually happens on the other side of the X
connection.


It's not always the case, but that's a good mental model. The driver
will kick pixmaps out of video RAM and into system RAM for a variety of
reasons, but it'll generally move them back to video RAM when you try to use them.



Generally, you only want to use the 32-bit visual if you expect the alpha
channel of your window to be used by a composite manager to blend your
window with whatever's below it.  If you're just doing alpha blending
yourself in order to produce opaque pixels to present in a window, you
should use a 24-bit visual and do your rendering using OpenGL or an
offscreen 32-bit pixmap.


Yeah, I eventually got it working without any Bad Match errors. My
window contents needed their own (depth-24) Visual (i.e. the Screen's
RootVisual), GContext, and Pictformat, and my source pixels (an
offscreen pixmap that may or may not be a SHM pixmap) separately
needed their own (depth-32) Visual, GContext, Pictformat, and Colormap.
I'm not sure if there's a better method, but I also made an unmapped
1x1 depth-32 Window just to get that GContext. It all makes sense, in
hindsight. It just wasn't obvious to me in foresight.


You should be able to create a GC for a Pixmap directly, rather than 
using a dummy window. Or am I misunderstanding what your dummy window is 
for?


It might make sense to do your rendering using a library like Cairo 
that can take care of the behind-the-scenes X11 work for you.



I'd like to avoid using OpenGL if possible. I'm using the Go
programming language, and can write a pure Go (no C) program that uses
the equivalent of XCB. IIUC, using OpenGL requires using Xlib instead
of XCB (or an equivalent of XCB), and I'd prefer to avoid depending on
Xlib and any potential issues with mixing goroutines (Go green
threads, roughly speaking) and OpenGL and Xlib's threading model.



Re: libxrandr XRRGetCrtcInfo() misinterpretation by nouveau or NVIDIA?

2015-06-19 Thread Aaron Plattner
On 06/19/2015 03:49 AM, Thomas Richter wrote:
> Am 19.06.2015 um 12:38 schrieb Chris Wilson:
>> Why is it that in the presence of randr1.3 XRRGetCrtcInfo()  returns
>> semantically(!) different information
>> than XRRGetCrtcInfo() with randr1.2 only? In the former case, it returns
>> the entire panning area. In the latter case,
>> it returns only the visible monitor size.
>> RR1.3 introduced panning. Without panning, the visible area of the CRTC
>> is exactly defined by the mode, rotation, transformation and offset.
>>
>> Basically nvidia have only implemented half of the extension. They allow
>> you to set panning, but don't report it back to the application.
>> -Chris
>>
> Thank you, now that makes sense! IOW, the panning I have with randr1.2
> here on NVIDIA is a proprietary extension
>  to randr1.2 that is not officially present and was added "on top"
> without implementing the 1.3 interface that actually
> communicates panning information.

Yes, the panning support there has been around for many, many years and
certainly predates RandR 1.2.  Getting the semantics of that wired up to
what RandR 1.3 expects was a little tricky, but it should have been
implemented in the 319.* driver series.  I'll see if I can find some
time to reproduce the problem and get a bug filed.

> Ok, so source of the bug identified. NVIDIA. If I only had a chance to
> talk to some engineer there and not just the
> average "support" people...
> 
> Thanks for your patience!
> 
> Thomas

-- 
Aaron

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Aaron Plattner
My guess is that each X server you start is switching to its own VT.
Since you're running Xorg by itself, there are initially no clients
connected.  When you run an application such as glxinfo that exits
immediately, or kill your copy of glxgears, it causes the server to
reset, which makes it initiate a VT switch to itself.  Only the X server
on the active VT is allowed to touch any of the hardware, so the other X
servers revoke GPU access whenever the one you touched last grabs the VT.

You can work around this problem somewhat by using the -sharevts and
-novtswitch options to make the X servers be active simultaneously, but
please be aware that this configuration is not officially supported so
you might run into strange and unexpected behavior.
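
Concretely, that would look something like this (the .gpu1 name just follows
the pattern from your message):

  Xorg :0 -config /etc/X11/xorg.conf.gpu0 -sharevts -novtswitch &
  Xorg :1 -config /etc/X11/xorg.conf.gpu1 -sharevts -novtswitch &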

On 01/19/2016 08:03 AM, Lloyd Brown wrote:
> Hi, all. 
> 
> I hope this isn't too dumb of a question, but I'm having trouble finding
> anything on it so far.  Not sure if my google-fu is just not up to the
> task today, or if it's genuinely an obscure problem.
> 
> I'm in the middle of setting up an HPC node with 2 NVIDIA Tesla K80s (4
> total GPUs), for some remote rendering tasks via VirtualGL.  But I've
> got some strange behavior I can't yet account for, and I'm hoping
> someone can point me in the right direction for diagnosing it.
> 
> In short, for accounting reasons, we'd prefer to have each GPU be
> attached to a separate Xorg PID.  So I've built some very simple
> xorg.conf files (example attached), and I can launch Xorg instances with
> a simple syntax like this:
> 
>> Xorg :0 -config /etc/X11/xorg.conf.gpu0
> 
> When I run my tests, I'm also watching the output of "nvidia-smi" so I
> can see which Xorg and application PIDs, are using which GPUs.
> 
> The first time I do something like "DISPLAY=:0.0 glxgears", I do *not*
> see that process (eg. glxgears) show up in the output of "nvidia-smi",
> and I see performance numbers consistent with CPU-based rendering.  If I
> cancel (Ctrl-C), and run the exact same command again, I *do* see the
> process in the output of "nvidia-smi", on the correct GPU, and I see
> faster performance numbers consistent with GPU rendering.
> 
> If I switch to a different display (eg "DISPLAY=:3.0"), I see the same
> behavior: slow the first time, fast on 2nd and subsequent instances. 
> The same behavior even repeats when I switch back to a previously-used,
> but not most-recently-used, DISPLAY.
> 
> I see similar behavior with other benchmarks (eg. glxspheres64,
> glmark2): slow first time on a display, faster after that.
> 
> I have a sneaking suspicion that I'm just doing something really stupid
> with my configs, but right now I can't find it.  I don't see anything
> relevant in the Xorg.log files, or stdout/stderr from the servers, but I
> can post those too, if needed.
> 
> Any pointers where to go from here, would be appreciated.
> 
> Thanks,
> Lloyd
> 
> 
> Other (possibly relevant) Info:
> OS Release: RHEL 6.6
> Kernel: 2.6.32-504.16.2.el6.x86_64
> Xorg server 1.10.4 (from RHEL RPM)
> NVIDIA Driver 352.55
> 
> Note: The attached example is for only one GPU.  The others configs are
> exactly the same, with the exception of the PCI BusID, inside the GPU
> device section.  I can verify via nvidia-smi, that the separate Xorg
> PIDs are attached to the correct GPUs.

-- 
Aaron

Re: XCompositeNameWindowPixmap vs using window directly

2016-02-10 Thread Aaron Plattner
On 02/09/2016 12:48 PM, adlo wrote:
> I am writing a window switcher application using GTK, Cairo, and Xlib that 
> shows previews of windows.
> 
> Assuming that I don't want to save the window's image for later use, what are 
> the advantages of using XCompositeNameWindowPixmap () compared to simply 
> using the X11 Window directly?

Windows can be resized or destroyed at any time, regardless of whether
you're trying to use them.

Naming the window pixmap gives you a reference to that particular
instance of the window's backing pixmap.  That means that the pixmap
can't be freed while you're using it, and pixmaps can't be resized.
When a redirected window is resized, its backing pixmap is replaced by a
new one.  It's your application's responsibility to select for and
handle the appropriate events to know when the backing pixmap is
replaced and to switch to the new backing pixmap as necessary, but you
don't need to worry about the pixmap disappearing while you're using it.
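
A minimal sketch of that flow (error and event handling omitted; 'target' is
the window you want to preview):

  #include <X11/extensions/Xcomposite.h>

  /* Redirect the window so it has a backing pixmap, then take a named
   * reference to that pixmap. */
  XCompositeRedirectWindow(dpy, target, CompositeRedirectAutomatic);
  Pixmap backing = XCompositeNameWindowPixmap(dpy, target);

  /* ... draw the preview from 'backing' ... */

  /* When an event tells you the window was resized, the old pixmap is
   * stale: drop your reference and name the new one. */
  XFreePixmap(dpy, backing);
  backing = XCompositeNameWindowPixmap(dpy, target);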

The other benefit of using the backing pixmap is that you can bind it
into OpenGL using the GLX_EXT_texture_from_pixmap extension.  That
extension doesn't work with windows, mostly because windows can be
resized and destroyed at any time.

The biggest downside to using XCompositeNameWindowPixmap() is that it
requires the window to be redirected.  Redirecting windows can incur
significant overhead just to display them, which might outweigh the
benefits of your window preview app.  Depending on how you want your UI
to work, your window switcher app might need to be a composite manager,
at which point you're probably better off trying to build it into an
existing compositor framework such as mutter or compiz rather than
trying to write your own.

-- Aaron

[ANNOUNCE] xrandr 1.5.0

2016-02-23 Thread Aaron Plattner

xrandr is a command line interface to the X11 Resize, Rotate, and Reflect
(RandR) extension.

This release adds support for the new monitor objects added in RandR 1.5, and
fixes a few bugs.


Aaron Plattner (2):
  Split verbose mode printing into a helper function
  xrandr 1.5.0

Chris Wilson (3):
  Mark disabling an output as a change in its CRTC
  Mark all CRTC as currently unused for second picking CRTC pass
  Only use the current information when setting modes

Dave Airlie (2):
  xrandr: parse property returns correctly.
  xrandr: don't return NULL from a void

Keith Packard (3):
  Increase keystone.5c default window size
  keystone: Report matrix error. Deal with "primary" in xrandr output
  Add monitor support (v2)

git tag: xrandr-1.5.0

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.5.0.tar.bz2
MD5:  ebffac98021b8f1dc71da0c1918e9b57  xrandr-1.5.0.tar.bz2
SHA1: f402b2ed85817c2e111afafd6f5d0657328be2fa  xrandr-1.5.0.tar.bz2
SHA256: c1cfd4e1d4d708c031d60801e527abc9b6d34b85f2ffa2cadd21f75ff38151cd  xrandr-1.5.0.tar.bz2
PGP:  http://xorg.freedesktop.org/archive/individual/app/xrandr-1.5.0.tar.bz2.sig

http://xorg.freedesktop.org/archive/individual/app/xrandr-1.5.0.tar.gz
MD5:  fe9cf76033fe5d973131eac67b6a3118  xrandr-1.5.0.tar.gz
SHA1: 9c55c9e9d5578f35d577b986278ad8b2d4405d06  xrandr-1.5.0.tar.gz
SHA256: ddfe8e7866149c24ccce8e6aaa0623218ae19130c2859cadcaa4228d8bb4a46d  xrandr-1.5.0.tar.gz
PGP:  http://xorg.freedesktop.org/archive/individual/app/xrandr-1.5.0.tar.gz.sig

Re: xf86-video-dummy: resize to exact resolution

2016-06-15 Thread Aaron Plattner
On 06/14/2016 03:53 PM, Erik Jensen wrote:
> When using Xorg with the dummy driver to host a virtual session for
> remote access, it is often desirable to resize the session to exactly
> match the resolution of the client. While the dummy driver supports
> switching between predefined resolutions, it provides no facility for
> switching to a resolution not defined in xorg.conf. The common
> workaround used by tools such as Xpra seems to be to define a long list
> of resolution in xorg.conf and hope the client matches or is close to
> one of them.
> 
> Chrome Remote Desktop is currently exploring support for using Xorg with
> the dummy driver instead of Xvfb, but exact resize would be needed for
> feature parity. As such, I would like to work to get this functionality
> included in the dummy driver. I am looking for feedback regarding the
> preferred approach for implementation (see below), and the next steps to
> move the process along.
> 
> I have found two proposed patches to add this functionality. The first
> is https://lists.x.org/archives/xorg-devel/2014-November/044580.html,
> which keeps the current virtual monitor model, but updates RandR support
> to 1.2, allowing custom modes to be added on the fly using
> --newmode/--addmode. The second patch is
> https://lists.x.org/archives/xorg-devel/2015-January/045395.html, which
> introduces a more fundamental change: it does away with providing a fake
> output and fake monitor all together, instead caring only about the
> virtual resolution, which can be set using --fb.
> 
> The first patch works the same way as the RandR 1.2 support added to
> Xvfb, last year, and has the advantage of allowing existing software
> such as Xpra to continue to work as is until updated to the new
> functionality. It has the disadvantage of requiring one to calculate a
> mode for the desired resolution that meets the timing requirements of an
> imaginary monitor, create the new mode, add it to the virtual output,
> and finally switch to it. Additionally, the maximum resolution that can
> be specified is limited by VideoRam, which still can only be specified
> in the config file.
> 
> The second patch has the advantage of making dummy simpler and cleaner:
> it is no longer necessary to calculate modelines to work with an
> imaginary monitor (and indeed no virtual output is presented), and one
> only needs to make a single RandR call to update the virtual size. It
> also allocates the needed memory on the fly, so one can pick any size up
> to 32767x32767. The main disadvantage is that programs expecting to be
> able to switch between predefined sizes using RandR 1.0 would stop
> working. E.g., Xpra wouldn't be able to resize the display at all until
> it was updated. It is also different from the approach taken with Xvfb,
> which means that code wanting to support both would need to handle them
> differently. (On the other hand, aside from Chrome Remote Desktop and
> Xpra, I'm not sure how many tools are trying to do something like this.)
> 
> Which approach seems more likely to gain traction?

Without a champion for either one, neither. :)

I'd obviously prefer to make -dummy simpler by removing the fake
outputs, but I don't have time at the moment to push for review or look
into the crash reported in
https://lists.x.org/archives/xorg-devel/2015-September/047331.html

-- 
Aaron

Re: Corrupted XImage retrieved from a Window area

2016-08-02 Thread Aaron Plattner

I think I found the problem and sent a patch:

https://lists.x.org/archives/xorg-devel/2016-August/050544.html

https://patchwork.freedesktop.org/patch/102574/


On 08/01/2016 07:44 AM, Fabien Lelaquais wrote:

Thanks again to you, Thomas, and Carsten.
I suspect I still was not clear enough. My problem is not the hidden 
pixels. I know they will be undefined (or I can use an XCopyArea and 
process the GraphicsExpose events).
My problem is that visible pixels (well defined, and in this case, 
white) are retrieved as black pixels.

To make things clearer, in case you cannot reproduce my problem:
I have two top-level windows A and B.
B is smaller than A, and located on top of it.
A is entirely on my screen (and B obviously is as well).
Here is what it looks like (sorry guys, you'll need a monospaced font to
see this properly):

+--------------------+
|         A          |
|     +--------+     |
|     |   B    |     |
|     +--------+     |
+--------------------+
If I XGetImage() from A a rectangle that contains the area covered by 
B (but still inside A), what I expect is:

+--------------------+
|                    |
|     ########       |
|     ########       |
|     ########       |
+--------------------+
Where the # signs indicate an undefined value: all the pixels covered 
by B. Makes sense.

But what I really get is:
+--------------------+
|                    |
|     ############## |
|     ############## |
|     ############## |
+--------------------+
That is, more pixels are reported as undefined, to the right of the 
expected 'undefined' region (the one covered by B).
My experiments show that the more I translate the capture rectangle to 
the right (limited to the surface of A), the more undefined pixels I 
will get.
If the capture rectangle has its left edge at 0 (on the A left-hand 
border), the image is perfect. This makes no sense at all.
Unfortunately, I cannot rely on the BackingStore or other properties: 
I may not be the one that created the Window (I'm working on a library 
that sits on top of X).

And yes, I suspect a bug in the server but honestly I don't believe it.
I tried this today: I can work the problem around by creating a 
temporary Pixmap (the size of my capture area), XCopyArea the window A 
into it, then XGetImage on that Pixmap.
Then my image is fine, at the cost of an additional Pixmap 
(potentially large) and a GC (where I dropped the generation of 
GraphicsExpose events that I don't care about).
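
A minimal sketch of that workaround (assuming a Display *dpy, the source 
window win, a capture rectangle x, y, w, h, and that the window uses the 
default depth; the names here are illustrative, not my exact code):

int scr = DefaultScreen(dpy);
Pixmap tmp = XCreatePixmap(dpy, win, w, h, DefaultDepth(dpy, scr));

XGCValues gcv;
gcv.graphics_exposures = False;          /* skip the GraphicsExpose events */
GC gc = XCreateGC(dpy, tmp, GCGraphicsExposures, &gcv);

XCopyArea(dpy, win, tmp, gc, x, y, w, h, 0, 0);
XImage *img = XGetImage(dpy, tmp, 0, 0, w, h, AllPlanes, ZPixmap);

/* ... read pixels out of img ... */

XDestroyImage(img);
XFreeGC(dpy, gc);
XFreePixmap(dpy, tmp);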

All my pixels are correct, except the covered ones, which is fair.
Thanks!
Fabien
-Original Message-
From: Carsten Haitzler [mailto:ras...@rasterman.com]
Sent: lundi 1 août 2016 00:50
To: Fabien Lelaquais 
Cc: x...@freedesktop.org
Subject: Re: Corrupted XImage retrieved from a Window area
On Sun, 31 Jul 2016 07:51:08 + Fabien Lelaquais 
> said:

> Thanks a lot for your answer.
> Unfortunately I may not be able to rely of the Composite extension
> (app would be deployed in environments I don't control).
>
> Regarding the XGetImage documentation, that I've read ten times:
> My drawable (mainWindow) is indeed a viewable window.
> It has no inferior, and an overlapping window on its center. The
> specified rectangle that I provide, which is the center part of the
> source window, is both fully visible on the screen and wholly
> contained in mainWindow. I have no X error. And that's why I'm calling for 
help.
you won't get an x error unless the region ends up being out of screen 
bounds.
if pixels of a window are clipped by the screen, a parent window, a 
shape rectangle list, or are covered by another window... they "do not 
exist" by default in x11. that is how it works. that is why thomas 
suggested pixmap redirection to ensure that pixels DO exist and force 
them to live in a pixmap irrespective of windows overlapping or the 
window being offscreen. without this you are in regular old x11 mode 
and if your source is a window... if at the time you grab, you cannot 
SEE the pixels on a screen... they do not exist.
irrespective of if someone drew to them just before. you can never get 
them.
you can "deal with it" and get pixels by setting includeinferiors in 
your gc subwindowmode before you grab. this will ignore clipping of 
overlapping windows and just grab whatever is there in the framebuffer 
as long as your resulting rectangle at the time x performs the grab is 
still within screen limits. this will get you the content including 
overlapping window content. this is effectively how you do screenshots 
in x11.
if the region is within screen limits of course... if it is not - 
problems. if you are managed by a window manager there is always then 
a race condition where the wm may have moved you off screen but you 
have not seen the event yet. you can xgrabserver first, then get 
geometry of your window and translate relative to root to ensure you 
have the correct clipping coordinates, getimage, then xungrab server 
to work around the race (and please at least xflush or xsync after the 
xungrabserver).
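
an illustrative sketch of that sequence (not exact code; it assumes dpy and 
win already exist and that the window uses the default depth):

Window root = DefaultRootWindow(dpy), child;
unsigned int w, h, bw, depth;
int wx, wy, rx, ry;

XGrabServer(dpy);

XGetGeometry(dpy, win, &root, &wx, &wy, &w, &h, &bw, &depth);
XTranslateCoordinates(dpy, win, root, 0, 0, &rx, &ry, &child);

XGCValues gcv;
gcv.subwindow_mode = IncludeInferiors;   /* ignore clipping by other windows */
gcv.graphics_exposures = False;
GC gc = XCreateGC(dpy, root, GCSubwindowMode | GCGraphicsExposures, &gcv);

Pixmap tmp = XCreatePixmap(dpy, root, w, h, depth);
XCopyArea(dpy, root, tmp, gc, rx, ry, w, h, 0, 0);
XImage *shot = XGetImage(dpy, tmp, 0, 0, w, h, AllPlanes, ZPixmap);

XUngrabServer(dpy);
XFlush(dpy);   /* make sure the ungrab actually reaches the server */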
note. i'm totally ignoring multiple visuals and depths here. :) if 
your screen 

Re: Getting help debugging X client crashes

2016-08-09 Thread Aaron Plattner
On 08/08/2016 03:01 AM, Michael Titke wrote:
> On 05/08/2016 20:17, Matt Lauria wrote:
>>
>> Can someone direct me where to get help tracking down a bug in X?
>>
>>  
>>
>> I’ve built a GUI (python3.4 using tkinter running gnome on RHEL 6.8)
>> which crashes daily with a:
>>
>>  
>>
>> X Error of failed request:  BadIDChoice (invalid resource ID chosen for this 
>> connection)
>>   Major opcode of failed request:  139 (RENDER)
>>   Minor opcode of failed request:  4 (RenderCreatePicture)
>>   Resource id in failed request:  0x4181254
>>   Serial number of failed request:  33776134
>>   Current serial number in output stream:  33776143
>>
>>  
>>
>> I’ve installed symbols/debuginfo and using gdb tried inspecting some
>> of the objects.
>>
>>  
>>
>> This seems similar to the bug at
>> https://bugzilla.mozilla.org/show_bug.cgi?id=458092 but when I looked
>> at xid.last/xid.max I didn’t see the same issue.
>>
>>  
>>
>>
>> Likely not -- the minor number corresponds to RenderCreatePicture,
>> not FreePicture.  Might have to dig into the X code to see what
>> generates BadID; owen was suggesting that it might be due to IDs
>> getting out of sync somehow:
>>
>> 13:03 < otaylor> vlad_: Trying to create two resources with the ID of
>> the
>>  second less than the ID of the first would cause that
>>
>>
>> But I have no idea how we'd get into that situation, unless the IDs
>> wrapped around?
> Addressing the X powers that be: I hope the above inlined comment is just a
> bad guess and not the reality: it's very Xlib-centric to check for
> bigger-than on the RID bits, and an actual binary tree of resource IDs
> might jump around like hell (because it might produce bit-reversed RIDs
> initially) without any possibility of reusing an already used ID.
> The core protocol standards are more terse on how to interpret RID
> bits and I hope one can still rely on that. Even XC-MISC would provide
> a free list which ...
> BadIDChoice for no good reason is no good choice then ...
The code for that is here:
https://cgit.freedesktop.org/xorg/xserver/tree/dix/resource.c#n1180

Bool
LegalNewID(XID id, ClientPtr client)
{
    void *val;
    int rc;

#ifdef PANORAMIX
    XID minid, maxid;

    if (!noPanoramiXExtension) {
        minid = client->clientAsMask | (client->index ?
                                        SERVER_BIT : SERVER_MINID);
        maxid = (clientTable[client->index].fakeID | RESOURCE_ID_MASK) + 1;
        if ((id >= minid) && (id <= maxid))
            return TRUE;
    }
#endif /* PANORAMIX */
    if (client->clientAsMask == (id & ~RESOURCE_ID_MASK)) {
        rc = dixLookupResourceByClass(&val, id, RC_ANY, serverClient,
                                      DixGetAttrAccess);
        return rc == BadValue;
    }
    return FALSE;
}

So the possibilities for BadIDChoice from the server here (aside from
some Xinerama weirdness) are 1) XID from the wrong client's range, or 2)
XID already in use. This definitely smells like a client-side issue.

> Regards,
> Michael
>

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: RandR 1.5 Monitors: "No monitor named '...'"

2016-09-14 Thread Aaron Plattner
On 09/11/2016 08:29 PM, Nathan Schulte wrote:
> I'm using X.org w/ Debian Sid:
> 
>> nmschulte@desmas-l:~$ Xorg -version
>>
>> X.Org X Server 1.18.4
>> Release Date: 2016-07-19
>> X Protocol Version 11, Revision 0
>> Build Operating System: Linux 3.16.0-4-amd64 x86_64 Debian
>> Current Operating System: Linux desmas-l 4.7.0-1-amd64 #1 SMP Debian
>> 4.7.2-1 (2016-08-28) x86_64
>> Kernel command line: BOOT_IMAGE=/vmlinuz-4.7.0-1-amd64
>> root=UUID=f5ba8b5c-63aa-4a67-a07c-dd8d3297b2d3 ro quiet
>> i915.enable_dp_mst=0
>> Build Date: 06 September 2016  01:32:44PM
>> xorg-server 2:1.18.4-2 (https://www.debian.org/support)
>> Current version of pixman: 0.33.6
>> Before reporting problems, check http://wiki.x.org
>> to make sure that you have the latest version.
> 
> I'm playing around with the new Monitors support which came with RandR
> 1.5 support.  Thanks for this awesome kit; it's extremely useful, and
> awesome that there's full-stack support for this virtualization concept
> finally.
> 
> Anyway, it seems I've been able to make RandR very confused; I cannot
> delete a monitor which xrandr tells me exists:
> 
>> nmschulte@desmas-l:~$ xrandr --listmonitors
>> Monitors: 2
>>  0: +*eDP1 1920/340x1080/190+0+0  eDP1
>>  1: dp2_0 0/0x0/0+0+0
>> nmschulte@desmas-l:~$ xrandr --delmonitor dp2_0
>> No monitor named 'dp2_0'
>> nmschulte@desmas-l:~$ xrandr --setmonitor dp2_0 auto eDP1
>> output list eDP1
>> add monitor eDP1
>> output name eDP1
>> X Error of failed request:  BadValue (integer parameter out of range
>> for operation)
>>   Major opcode of failed request:  140 (RANDR)
>>   Minor opcode of failed request:  43 ()
>>   Value in failed request:  0x2c0
>>   Serial number of failed request:  43
>>   Current serial number in output stream:  44
>> nmschulte@desmas-l:~$ xrandr --listmonitors
>> Monitors: 2
>>  0: +*eDP1 1920/340x1080/190+0+0  eDP1
>>  1: dp2_0 0/0x0/0+0+0
> 
> Below is my setup; this is on a laptop with Intel Haswell / Intel HD
> 4600 graphics, as well as an AMD Radeon HD 8970M gpu (no heads; render
> offload hybrid graphics setup).
> 
>> nmschulte@desmas-l:~$ xrandr
>> Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
>> eDP1 connected 1920x1080+0+0 (normal left inverted right x axis y
>> axis) 340mm x 190mm
>>1920x1080 60.00*+  59.9350.00
>>1680x1050 59.9559.88
>>1600x1024 60.17
>>1400x1050 59.98
>>1600x900  60.00
>>1280x1024 60.02
>>1440x900  59.89
>>1280x960  60.00
>>1368x768  60.00
>>1360x768  59.8059.96
>>1152x864  60.00
>>1280x720  60.00
>>1024x768  60.00
>>1024x576  60.00
>>960x540   60.00
>>800x600   60.3256.25
>>864x486   60.00
>>640x480   59.94
>>720x405   60.00
>>640x360   60.00
>> DP1 disconnected (normal left inverted right x axis y axis)
>> DP2 disconnected (normal left inverted right x axis y axis)
>> HDMI1 disconnected (normal left inverted right x axis y axis)
>> HDMI2 disconnected (normal left inverted right x axis y axis)
>> HDMI3 disconnected (normal left inverted right x axis y axis)
>> VIRTUAL1 disconnected (normal left inverted right x axis y axis)
> 
> Also, can anyone explain the VIRTUAL1 output?  I wonder why it exists,
> what purpose it serves.
> 
> As well, this laptop "only" has three external output connectors: one
> mini DisplayPort, one DisplayPort, and one HDMI.  Do the two other HDMI
> outputs show in the list because the DisplayPort connectors are
> dual-mode DisplayPort / DisplayPort++?  If so, is there any part of the
> stack that can communicate this (to user-space)?  I don't believe it's
> possible to use the HDMI and DP outputs at the same time for a single
> DP++ port, so it's somewhat confusing that they're listed in the list,
> if what I'm suggesting above is the case.

There is supposed to be a "ConnectorNumber" property that you can use to
correlate these RandR outputs with physical connectors.

https://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt?id=cf3272717e08325f69bdbb759ab35cb4d1839fb7#n1931

E.g., on my system, these two are halves of the same mini-DisplayPort
connector:

DP-0 connected 1920x1200+0+960 (normal left inverted right x axis y
axis) 518mm x 324mm
_MUTTER_PRESENTATION_OUTPUT: 0
CscMatrix: 65536 0 0 0 0 65536 0 0 0 0 65536 0
EDID:
000010ac2ea055574a31
1b1201038034207891a3544c9926
0f5054a54b00714f8180a94001010101
010101010101283c80a070b023403020
36000644211a00ff00473237
3348383731314a57552000fc0044
454c4c20453234385746500a00fd
00384c1e5311000a20202020202b
BorderDimensions: 4
supported: 4
Border: 0 0 0 0
range: (0, 65535)
SignalFormat: TMDS
supported: TMDS
Connec

Re: RandR 1.5 Monitors: "No monitor named '...'"

2016-09-15 Thread Aaron Plattner

On 09/15/2016 12:56 PM, Nathan Schulte wrote:

Thanks for the reply, Aaron.  I assume then I've stumbled upon an issue
w/ this version of xrandr / X's RandR 1.5 impl?

On 09/14/2016 11:16 AM, Aaron Plattner wrote:

There is supposed to be a "ConnectorNumber" property that you can use to
correlate these RandR outputs with physical connectors.

https://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt?id=cf3272717e08325f69bdbb759ab35cb4d1839fb7#n1931


E.g., on my system, these two are halves of the same mini-DisplayPort
connector:

DP-4 disconnected (normal left inverted right x axis y axis)
CscMatrix: 65536 0 0 0 0 65536 0 0 0 0 65536 0
BorderDimensions: 4
supported: 4
Border: 0 0 0 0
range: (0, 65535)
SignalFormat: DisplayPort
supported: DisplayPort
ConnectorType: DisplayPort
ConnectorNumber: 1
_ConnectorLocation: 1


Aaron, what did you use to produce that output?  I see something similar
w/ the output of xrandr (note the "Clones" and "CRTC" and "CRTCs"
properties).  If I understand correctly, all of these outputs are from
the Intel Haswell HD 4600, and none from the AMD Radeon.


It was "xrandr --prop" with the NVIDIA driver since that's what I work 
on. --verbose should also print the properties so if you're not seeing 
them, then it sounds like the Intel driver isn't providing them.



$ xrandr --version
xrandr program version   1.5.0
Server reports RandR version 1.5



$ xrandr --verbose
Screen 0: minimum 8 x 8, current 3600 x 3840, maximum 32767 x 32767
eDP1 connected (normal left inverted right x axis y axis)
Identifier: 0x79
Timestamp:  56437148
Subpixel:   unknown
Clones:
CRTCs:  0 1 2
--- snip ---
DP1 disconnected (normal left inverted right x axis y axis)
Identifier: 0x7a
Timestamp:  56437148
Subpixel:   unknown
Clones: HDMI1
CRTCs:  0 1 2
--- snip ---
DP2 connected primary 1200x3840+0+0 (0x13b) left (normal left inverted
right x axis y axis) 580mm x 360mm
Identifier: 0x7b
Timestamp:  56437148
Subpixel:   unknown
Gamma:  1.0:1.0:1.0
Brightness: 1.0
Clones: HDMI3
CRTC:   0
CRTCs:  0 1 2
--- snip ---
HDMI1 connected 1200x1920+2400+0 (0x13a) left (normal left inverted
right x axis y axis) 580mm x 360mm
Identifier: 0x7c
Timestamp:  56437148
Subpixel:   unknown
Gamma:  1.0:1.0:1.0
Brightness: 1.0
Clones: DP1
CRTC:   1
CRTCs:  0 1 2
--- snip ---
HDMI2 connected 1200x1920+1200+0 (0x13a) left (normal left inverted
right x axis y axis) 580mm x 360mm
Identifier: 0x7d
Timestamp:  56437148
Subpixel:   unknown
Gamma:  1.0:1.0:1.0
Brightness: 1.0
Clones:
CRTC:   2
CRTCs:  0 1 2
--- snip ---
HDMI3 disconnected (normal left inverted right x axis y axis)
Identifier: 0x7e
Timestamp:  56437148
Subpixel:   unknown
Clones: DP2
CRTCs:  0 1 2
--- snip ---
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
Identifier: 0x7f
Timestamp:  56437148
Subpixel:   no subpixels
Clones:
CRTCs:  3
--- snip ---


I'm still not sure how or why I would use this VIRTUAL1 output; I can't
help but wonder if it has to do w/ DRI Prime (hybrid graphics), but I
guess it's not and is [necessary, and subsequently] used internally in
the driver stack somehow else.

--
Nate


___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: XOpenDisplay call sequence

2016-11-29 Thread Aaron Plattner
On 11/28/2016 01:31 PM, Krzywicki, Alan wrote:
>
> So if I follow the XOpenDisplay sequence up the stack I see
> xcb_connect() / _xcb_open_abstract() trying to open
> “/tmp/.X11-unix/X0” with protocol set to 0.   On one system it
> eventually calls select(), on another it uses poll() instead, so it is
> looking for a response.  Once in a while it takes over a minute to get
> a response.   Can anyone describe a general overview of what it is
> trying to open?  Any idea why the select/poll call hangs so long?
>
>  
>
> Stack example:
>
>  
>
> #0  0xe424 in __kernel_vsyscall ()
> #1  0xb6ad657d in select () from /lib/i686/libc.so.6
> #2  0xb68997fc in ?? () from /usr/lib/libxcb.so.1
> #3  0xb68985db in xcb_connect_to_fd () from /usr/lib/libxcb.so.1
> #4  0xb689b6cd in xcb_connect () from /usr/lib/libxcb.so.1
> #5  0xb7004b9b in _XConnectXCB () from /usr/lib/libX11.so.6
> #6  0xb6fe5f10 in XOpenDisplay () from /usr/lib/libX11.so.6
> #7  0x0805f463 in main ()
>
>  
>
My guess is that the missing stack frames are

#2.0 read_block()
#2.1 _xcb_in_read_block()
#2.2 read_setup()

If that's where it's blocked, then it's waiting for the server to send
the setup block. Either the server is busy processing another client's
request, or some other client grabbed the server and hasn't ungrabbed it
yet. You'll have to debug the server to see whether it's stuck too or if
/ why it chose not to send the connection block.
>
>  
>
> / Alan K.
>
> ---
> This communication contains confidential information. If you are not
> the intended recipient please return this email to the sender and
> delete it from your records.
>
> ___
> xorg@lists.x.org: X.Org support
> Archives: http://lists.freedesktop.org/archives/xorg
> Info: https://lists.x.org/mailman/listinfo/xorg
> Your subscription address: %(user_address)s


___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: RandR 1.5 "Monitors" and splitting a single physical display

2017-03-15 Thread Aaron Plattner
On 03/13/2017 11:32 AM, Nathan Schulte wrote:
> On 03/07/2017 05:56 PM, Jack Coulter wrote:
>> Is it possible for there to be multiple monitors for a single output, at
>> least as far as the RandR protocol is concerned, and is support simply
>> needed in xrandr, or is what I'm trying to do simply not possible?
> 
> I wonder the same; I'm looking to split an Output as two Monitors, and
> then rotate one of the Monitors.  I am using an active splitter, like
> Jack is w/ the Matrox Dual Head 2 Go devices (I'm using a Sunix DPD2001,
> with DisplayPort Multi-Stream Transport disabled).

You're not going to be able to rotate one monitor with the existing protocol. 
Rotation happens at the crtc, not the output or monitor.

> I ran into issues trying to set up multiple Monitors for an Output,
> and sent a mail to the list in September 2016; Aaron Plattner from
> NVIDIA responded, pointing out the "ConnectorNumber" property in the
> xrandr --prop output.  The Intel hardware I'm using is an Intel Skylake:
> Iris Pro Graphics P580 [8086:193d] (rev 09).
> 
> The thread can be viewed here:
> 
> https://lists.x.org/archives/xorg/2016-September/058245.html
> 
> I had to give up at the time and still haven't had a chance to poke this
> again.  I have a feeling this isn't supported, but I couldn't find
> anything in the protocol/extension specifications stating so.  In fact,
> the specs imply that it _is_ indeed possible.  Perhaps only certain
> drivers provide this level of support?
> 
> Jack, let me know if you figure this out, please and thank you!

From the protocol, it sounds like it makes a distinction between monitors with 
outputs in them and monitors without.

Output-ful monitors have their geometry set automatically:

If 'info.outputs' is non-empty, and if x, y, width, height are all
zero, then the Monitor geometry will be dynamically defined to
be the bounding box of the geometry of the active CRTCs
associated with them.

But you can create a monitor with zero outputs and set its geometry manually.

The auto-delete behavior sounds carefully worded to only apply when removing an 
output from a output-ful monitor, and leaves monitors that never had outputs 
alone:

For each output in 'info.outputs', each one is removed from all
pre-existing Monitors. If removing the output causes the list of
outputs for that Monitor to become empty, then that Monitor will
be deleted as if RRDeleteMonitor were called.

So for this, I would imagine that you would want to create two monitors, both 
with no outputs, that just happen to overlap the output you want to split.
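
For what it's worth, a rough libXrandr 1.5 sketch of creating one such 
output-less monitor might look like this (untested; the monitor name, 
geometry, and physical size here are made up for illustration):

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    Window root = DefaultRootWindow(dpy);

    XRRMonitorInfo *mon = XRRAllocateMonitor(dpy, 0);   /* zero outputs */
    mon->name      = XInternAtom(dpy, "left-half", False);
    mon->primary   = False;
    mon->automatic = False;          /* geometry is set manually, not derived */
    mon->x = 0;        mon->y = 0;
    mon->width = 960;  mon->height = 1080;   /* left half of a 1920x1080 output */
    mon->mwidth = 260; mon->mheight = 340;   /* physical size in mm, approximate */

    XRRSetMonitor(dpy, root, mon);
    XFree(mon);
    XCloseDisplay(dpy);
    return 0;
}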

-- Aaron
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: X running on target is picking drivers from wrong directory

2017-07-19 Thread Aaron Plattner
On 07/18/2017 11:49 PM, abhijit wrote:
> Hi,
> 
> I am trying to build xserver 1.17.1 for imx57 based board. But cross
> compiled executable is trying to pick libraries and drivers from NFS
> root file system path.
> 
> Following is the configuration command,
>   > ./configure --enable-maintainer-mode
>   --host=arm-xilinx-linux-gnueabi
>   --prefix=/media/VAYAVYA/freedreno_proj/xserver/installs

Try using typical settings for the directories and then installing with
DESTDIR. So I guess that would be (stealing the options from Arch Linux):

./configure --host=arm-xilinx-linux-gnueabi \
    --prefix=/usr \
    --libexecdir=/usr/lib/xorg-server \
    --sysconfdir=/etc \
    --localstatedir=/var \
    --with-xkb-path=/usr/share/X11/xkb \
    --with-xkb-output=/var/lib/xkb \
    --with-fontrootdir=/usr/share/fonts

DESTDIR=/media/VAYAVYA/freedreno_proj/xserver/installs make -j`nproc` install

>   LIBS="-lm -ldl" --with-log-dir=/var/log/
> 
> Following is snippet of /var/log/Xorg.0.log from target,
>   /media/VAYAVYA/freedreno_proj/xserver/installs/lib/xorg/protocol.txt
> 
> (==) ModulePath set to " /media/VAYAVYA/freedreno_proj/xserver/installs/"
> 
> (EE) Failed to load module "glx" (module does not exist, 0)
> (II) LoadModule: "dri"
> (WW) Warning, couldn't open module dri
> (II) UnloadModule: "dri"
> (EE) Failed to load module "dri" (module does not exist, 0)
> (II) LoadModule: "dri2"
> 
> Can some one help me what are proper configuration options.
> 
> I tried the following, but then xserver is trying to install in
> /usr/lib/xorg/modules, which is in the system root directory.
> ./configure --enable-maintainer-mode --host=arm-xilinx-linux-gnueabi
> --prefix=/media/VAYAVYA/freedreno_proj/xserver/installs LIBS=-lm -ldl
> --with-log-dir=/var/log/ --with-module-dir=/usr/lib/xorg/modules
> --with-dri-driver-path=/usr/lib/dri
> 
> Thanks,
>Abhijit
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: xorg.0.log repetitive entries

2017-10-20 Thread Aaron Plattner

On 10/20/2017 09:30 AM, Adam Jackson wrote:

On Thu, 2017-10-19 at 20:29 -0300, sawb...@gmx.net wrote:


For example, these lines accounting for the mode setting of each one
of my three monitors:

(II) NVIDIA(0): Setting mode "CRT-0:1280x1024"
(II) NVIDIA(1): Setting mode "DFP-0:1280x1024"
(II) NVIDIA(2): Setting mode "CRT-1:1280x1024"


I'm pretty sure these messages are printed in response to configuration
commands issued by the desktop session itself. So if they're being
printed repeatedly, it's because your DE is doing repeated work.


Either that, or the server is regenerating.

What login manager & desktop environment are you using, and is it 
possible that the session startup is running a few short-lived X clients 
before starting one that will hold the server open?


-- Aaron
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: xorg.conf Settings Being Overridden By nvidia-auto-select

2018-01-31 Thread Aaron Plattner

Hi Junk,

This indicates that some X client (probably your desktop environment) is 
overriding the mode set in xorg.conf with its own.


-- Aaron

On 01/31/2018 12:52 PM, Junk Moody wrote:
I'm having trouble with nvidia-auto-select apparently overriding my 
settings in xorg.conf.


I used nvidia-settings to set my resolution to 1920x1080 and saved the 
file successfully to /etc/X11/xorg.conf.  I confirmed that the desired 
resolution is in that file, but upon reboot it isn't getting used.  I 
reviewed

/var/log/Xorg.0.log which claims it is using my xorg.conf:
     [13.773] (==) Using config file: "/etc/X11/xorg.conf"
and even claims to be using my resolution specified in that config file
     [14.000] (**) NVIDIA(0): Option "MetaModes" "1920x1080 +0+0 
{viewportout=1850x1040+35+19}"

     ...
     [14.602] (II) NVIDIA(0): Validated MetaModes:
     [14.602] (II) NVIDIA(0): 
"1920x1080+0+0{viewportout=1850x1040+35+19}"
     [14.602] (II) NVIDIA(0): Virtual screen size determined to be 1850 
x 1040

     ...
     [14.624] (II) NVIDIA(0): Setting mode 
"1920x1080+0+0{viewportout=1850x1040+35+19}"

Yet, further down in the log I see
[16.472] (II) NVIDIA(0): Setting mode "HDMI-0: nvidia-auto-select 
@3770x2120 +0+0 {ViewPortIn=3770x2120, ViewPortOut=3770x2120+35+19}"

which I assume is overriding my xorg.conf setting.

I checked all the following locations and only found /etc/X11/xorg.conf 
which is the one I created via nvidia-settings:

    /etc/X11/
    /usr/etc/X11/
    /etc/X11/$XORGCONFIG
    /usr/etc/X11/$XORGCONFIG
    /etc/X11/xorg.conf   # found
    /etc/xorg.conf
    /usr/etc/X11/xorg.conf.
    /usr/etc/X11/xorg.conf
    /usr/lib/X11/xorg.conf.
    /usr/lib/X11/xorg.conf

How do I determine where the override is coming from?

I'm running Ubuntu 16.04.3, nVidia 384.111, kernel  4.4. Xorg.0.log is 
attached.



--
Jerry


___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s



Re: Xorg VRAM leak because of Qt/OpenGL Application

2018-07-02 Thread Aaron Plattner
On 07/02/2018 09:22 AM, Dennis Clarke wrote:
> On 07/01/2018 10:11 PM, Mathieu Westphal wrote:
>> Hello list,
>>
>> I am working on a complex Qt/OpenGL Application.
>> Xorg starts leaking in VRAM when i'm using the application and never
>> release the memory, until I restart X of course.
>>
>> $ nvidia-smi
> 
> I think you are looking at output from an nvidia tool and not memory
> for the system and processes as a whole.
> 
>> The version of Xorg does not matter, tested a few.
>> The version of the driver does not matter, as long as it's nvidia,
>> tested 340, 384, 390
> 
> Using 384.98 here.  Very stable.
> 
> However I think you are looking at output from nvidia-smi here and not
> actual process data from the /proc/$PID/stat where $PID is the pid of
> your X process.

Based on Mathieu's email subject, it sounds like he's interested in how
much GPU memory Xorg is using. The process data in /proc does not
include GPU memory.

Mathieu, when you say memory is leaked, do you mean that the memory
usage increases each time you run myOpenGLQtBasedApp, or does it
increase from 50 MB to 110 MB and then stay there even if you run the
app again?

You can diagnose which clients are causing the server to allocate
resources by running tools such as xrestop, xwininfo -tree -root, and
xlsclients before and after running your app each time.

If you're still having trouble, you can email linux-b...@nvidia.com and
we can try to help you out there.

-- Aaron

> For example :
> 
> sed$ ps -ef |  grep "bin\/X"
> root  2488  2429  3 Jun15 tty1 13:32:18 /usr/bin/X :0
> -background none -noreset -audit 4 -verbose -auth
> /run/gdm/auth-for-gdm-TVlXTy/database -seat seat0 -nolisten tcp vt1
> 
> sed$ cat /proc/2488/stat
> 2488 (X) S 2429 2488 2488 1025 2488 4202752 15866906 3576 170 0 407
> 762600 6 3 20 0 2 0 4079 569253888 46342 18446744073709551615 1 1 0 0 0
> 0 0 3149824 1098933967 18446744073709551615 0 0 17 1 0 0 1192 0 0 0 0 0
> 0 0 0 0 0
> sed$
> 
> The actual rss ( Resident Set Size ) is what you should have a look at.
> According to PROC(5) you can get that from "stat" under /proc for a
> given pid. That is field 24 here :
> 
> sed$ cat /proc/2488/stat | awk '{ print $24 }'
> 46342
> 
> These are pages of memory and that reports :
> 
>     Resident Set Size: number of pages the process has in real memory.
>     This is just the pages which count toward text, data, or stack
>     space.  This does not include pages  which  have  not  been
>     demand-loaded in, or which are swapped out.
> 
> 
> Your page size may be 8192 bytes or 4096 bytes or something else:
> 
> sed$ getconf -a | grep "PAGE"
> PAGESIZE   4096
> PAGE_SIZE  4096
> _AVPHYS_PAGES  95456
> _PHYS_PAGES    8187584
> 
> nix$ getconf -a | grep "PAGE"
> PAGESIZE   65536
> PAGE_SIZE  65536
> _AVPHYS_PAGES  89121
> _PHYS_PAGES    95356
> 
> 
> So while the nvidia-smi tool may seem to tell you that a process needs
> more memory in the GPU it isn't telling you much about the process
> running on your system.
> 
> sed$ nvidia-smi -q -d POWER,TEMPERATURE,PIDS
> 
> ==NVSMI LOG==
> 
> Timestamp   : Mon Jul  2 17:17:10 2018
> Driver Version  : 384.98
> 
> Attached GPUs   : 1
> GPU :86:00.0
>     Temperature
>     GPU Current Temp    : 40 C
>     GPU Shutdown Temp   : 102 C
>     GPU Slowdown Temp   : 97 C
>     GPU Max Operating Temp  : 80 C
>     Memory Current Temp : N/A
>     Memory Max Operating Temp   : N/A
>     Power Readings
>     Power Management    : Supported
>     Power Draw  : 16.28 W
>     Power Limit : 110.00 W
>     Default Power Limit : 110.00 W
>     Enforced Power Limit    : 110.00 W
>     Min Power Limit : 100.00 W
>     Max Power Limit : 130.00 W
>     Power Samples
>     Duration    : N/A
>     Number of Samples   : N/A
>     Max : N/A
>     Min : N/A
>     Avg : N/A
>     Processes
>     Process ID  : 2488
>     Type    : G
>     Name    : /usr/bin/X
>     Used GPU Memory : 218 MiB
>     Process ID  : 13211
>     Type    : G
>     Name    : /opt/firefox/firefox
>     Used GPU Memory : 21 MiB
>     Process ID  : 32110
>     Type    : G
>     Name    : /opt/firefox/firefox
>     Used GPU Memory : 21 MiB
>     Process ID  : 32668
>  

Re: Xorg.O.log (WW) message

2018-09-25 Thread Aaron Plattner

On 9/24/18 3:08 PM, sawb...@gmx.net wrote:

Hello:

I have this message in my Xorg.O.log:

[code]
[34.298] (WW) Unresolved symbol: fbGetGCPrivateKey
[/code]

I'm running Devuan ASCII with two NVidia cards for three monitors, using the 
proprietary drivers.


What version of the X server are you using? If I'm reading things 
correctly, this implies that you're using xserver 1.9 or older, but your 
server doesn't export the fbGetGCPrivateKey() function for some reason.


This function is only used on older GPUs when workstation overlays are 
enabled, so if you aren't enabling those I think it should be harmless.


-- Aaron


I have searched all over the web and have found a great many posts/instances in 
which this same error is cited (as an entry in a Xorg.0.log file), but I have 
not been able to find out why it is there or whether it has some significance.

After all, it is labelled as a warning, i.e. (WW).

It is (apparently) innocuous as my NVidia cards are working properly (save for 
an artifacts
issue reserved for another post).

I'd appreciate if someone could give me some insight on this.

Thanks in advance.

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: xorg disable GPU acceleration

2018-10-22 Thread Aaron Plattner
On 10/14/18 11:37 PM, Łukasz Maśko wrote:
> On Saturday, 13 October 2018 at 09:22:16, Иван Талалаев wrote:
> [...]
>> Do you know how to disable xorg hardware acceleration?
> 
> If you don't need hardware acceleration, maybe you could use the generic VESA 
> driver instead of the NVidia one?
> 

There's also an option to use the nvidia driver for display but disable
acceleration:

https://download.nvidia.com/XFree86/Linux-x86_64/396.54/README/xconfigoptions.html#Accel
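
In xorg.conf terms, that would look something like this (a rough sketch; see
the README link above for the exact semantics of the option):

Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    Option     "Accel" "off"
EndSection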
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: what does "+"(preferred) means in xrandr?

2018-11-26 Thread Aaron Plattner
"preferred" generally means that the Extended Display Information Data
(EDID) from the monitor indicates that that mode most closely matches
the native timings of the display. All modes listed by RandR for a given
monitor should work with that monitor, but the preferred mode should
work best (for some definition of "best").

RandR will send events when the list of modes for an output changes. To
receive those, you should use XRRSelectInput() to select for them. I
*think* the mask you want is RROutputChangeNotifyMask.
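
A minimal sketch of listening for those events (untested; it assumes libXrandr
and an open X connection):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int ev_base, err_base;

    if (!dpy || !XRRQueryExtension(dpy, &ev_base, &err_base))
        return 1;

    /* Ask for output change notifications on the root window. */
    XRRSelectInput(dpy, DefaultRootWindow(dpy), RROutputChangeNotifyMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ev_base + RRNotify &&
            ((XRRNotifyEvent *)&ev)->subtype == RRNotify_OutputChange) {
            /* An output's configuration (and so its mode list) may have
             * changed; re-query it with XRRGetScreenResources(). */
            printf("RandR output change event received\n");
        }
    }
}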

You can get more information about RandR resources and requests in
randrproto.txt:
https://gitlab.freedesktop.org/xorg/proto/randrproto/blob/master/randrproto.txt

On 11/24/18 8:20 AM, pengyixiang wrote:
> hello everyone!
>     What the “preferred" means in xrandr?  Is it setted by display
>     What does "preferred" mean in xrandr?  Is it set by the display
> hardware? Can we set it manually? If we call "XOpenDisplay" to open the
> default screen and then poll the returned x11_fd, will we be interrupted
> when "preferred" changes? Are there other docs about it? Looking forward
> 
> 
> Cheers,
> Pencc
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: support question regarding X11 1.20.1

2019-06-24 Thread Aaron Plattner
On 6/19/19 2:10 PM, Greenamyer, Shaun wrote:
> Hey guys,
> 
> I hope this message isn’t an annoyance but I’m not sure the best place
> to turn right now.
> 
> I am having an issue with my X windows not getting “erased” when the
> window is closed. I’m probably not explaining well which is why my
> google searches have been empty. Basically I am running MWM as my window
> manager and opening an Xterm then exiting the xterm, after this the
> xterm window is still displayed but does not respond. I checked xwininfo
> and I see no reference to the window that I closed so I feel like the
> window was destroyed.   Another interesting observation is that if I
> take another good window and drag it over the bad window, the good
> window will slide under the bad window. It's very strange.  Lastly I have
> the NVIDIA driver installed (430.26) and this issue only happens when
> the COMPOSITE extension is disabled.

Are you using a Red Hat build of the X server, and do you have
workstation overlays enabled, by any chance?

The symptoms sound like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1683853

-- Aaron
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: Can X11 return an image with an NVIDIA GPU memory address?

2019-08-12 Thread Aaron Plattner

On 8/3/19 10:09 AM, Suhail Doshi wrote:

Hi there,

My goal is to try to get the frame of a desktop to do low-latency remote 
desktop. I am interested in using X11 as the window manager.


When I run nvidia-smi, I noticed that X11 is a process that interacts with 
the GPU:

+------------------------------------------------------------------+
| Processes:                                            GPU Memory |
|  GPU       PID   Type   Process name                  Usage      |
|==================================================================|
|    0      3255      G   /usr/lib/xorg/Xorg                 57MiB |
|    0      3286      G   /usr/bin/gnome-shell               81MiB |
+------------------------------------------------------------------+

My question is: Is there a way to get an image of the desktop that 
returns a pointer that's GPU memory? For example, in Windows 10, there's 
an API called the Desktop Duplication API which will allows you to do 
this. Then, it lets you copy the data in that GPU memory block to 
another one such that you can encode the frame with H264, for example. I 
am using NVIDIA GPUs and utilizing their NVENC SDK.


The video memory used for the desktop is not directly accessible by the CPU.


I am looking for an equivalent.

Right now, it appears that XShmGetImage returns the frame of the desktop 
in host memory versus device memory which increases latency by 5-10ms. 
For our purposes, a reduction of 5ms is meaningful.


CPU-mapped device memory is likely much slower to access than you think, 
so XShmGetImage is probably much faster than the CPU-mapped approach 
you're thinking of.


What it sounds like you're looking for is the NVIDIA Capture SDK, which 
provides an API that applications can use to capture frames into video 
memory and do on-GPU encoding to H.264 before streaming to system memory.


You can find more information at
https://developer.nvidia.com/capture-sdk

Sincerely,
Aaron

Should I use kernel-level APIs like interacting with Direct Rendering 
Manager and Kernel Mode Setting to accomplish this instead?


Just looking for options. Thanks!

Suhail

--

Founder

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s

Re: Dummy video driver plea

2020-05-11 Thread Aaron Plattner

On 5/11/20 5:33 AM, Mgr. Janusz Chmiel wrote:

Please, if I will only use the dummy video driver and x11vnc, does setting
the monitor frequency to 80 Hertz make any sense? Or will it not increase
app responsiveness at all?

Because the VNC protocol is being used?


The only parts of the mode information that are relevant to the dummy 
driver are the width and the height. Everything else is ignored.


-- Aaron
___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: How to switch to a particular resolution? Xorg complains "No valid modes for "DFP-1:2560x1600"; removing." and ignores my setting.

2020-09-23 Thread Aaron Plattner

On 9/23/20 1:35 PM, Yuri wrote:

On 2020-09-23 08:21, Pete Wright wrote:
$ xrandr |grep 2560 



Interestingly, this command shows nothing after Xorg was started, but 
shows 2560x1600 after the "NVidia settings" program changes the 
resolution to 2560x1600.


I just need to find a way, if any, to do this automatically.

I think we need a better picture of what's actually connected to your 
system and what it's reporting itself as capable of. The full output of 
xrandr would help, and it would also help to add


Option "ModeDebug"

to the "Device" or "Screen" section of /etc/X11/xorg.conf, restart the X 
server, and then attach /var/log/Xorg.0.log.


-- Aaron

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: How to switch to a particular resolution? Xorg complains "No valid modes for "DFP-1:2560x1600"; removing." and ignores my setting.

2020-09-24 Thread Aaron Plattner

On 9/23/20 11:31 PM, Yuri wrote:

But how does it work when the NVidia utility switches it to this mode then?


Does the output look blurry or fuzzy when that happens? I wonder if 
you're getting a 2560x1600 desktop scaled down to 1920x1080.


You should be able to tell for sure by looking at the output of "xrandr 
--verbose" and "nvidia-settings -q CurrentMetaMode" after setting that 
configuration.



I think Adam's analysis is right: your GPU and/or cable can't support 
the full 3840x2160 native mode of this display, so it falls back to the 
highest mode the monitor says it can support that fits within the 
available bandwidth.


-- Aaron


I use the HDMI cable.

Yuri

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: Feature request, but must be universallly accepted by ALL blanker authors

2020-10-02 Thread Aaron Plattner

On 10/2/20 1:17 PM, Dan Arena wrote:

Gene,

Following up more about xfce4, you should be able to go into their
Settings Manager and you can turn off the Screensaver and uncheck
Power Management under the Advanced tab. You will still want to add
the lines I mentioned before into /etc/X11/xorg.conf to prevent the
screen from going blank though.

I also think it would be a useful feature for LinuxCNC to include an
option where it itself can prevent screensavers. This would not be too
hard for them to do, see https://stackoverflow.com/a/31504731/1941627

A friend also just brought up a good point... do these machines not
have a physical emergency stop button? It seems like with them being
as dangerous as you say they are, they should. I know the couple mills
I have seen do.


I agree with this. There are so many things that could go wrong between 
the keyboard and software commanding the machine to stop (swapping 
prevents something from getting scheduled, interrupt storm from a rogue 
device delays processing, etc.). If this is really a safety critical 
feature then everything in the system along this path needs to have 
realtime guarantees, redundancy, and follow something like MISRA coding 
standards.


It would be *way* easier and cheaper to put a big red stop button on the 
machine itself, and bypass this problem of screen locking programs entirely.


I would not recommend relying on the computer for this.

-- Aaron


I would also take this issue up with the LinuxCNC community. Is it
supposed to work like that? Does a new install from the LinuxCNC
"Install DVD" behave the same?

Thanks,
Dan

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: xorg.conf question

2021-03-04 Thread Aaron Plattner
These are NVIDIA-specific options so they're kinda offtopic here, but 
I'll try to address them below.


You might want to consider posting on the NVIDIA forum [1]
or by emailing linux-b...@nvidia.com.


[1] https://forums.developer.nvidia.com/c/gpu-unix-graphics/linux/148

On 3/4/21 9:36 AM, Greene, Paul J. wrote:


Hello,

First post on this list – don't be too hard on me. :)

I support a bunch of software developers that use a KVM to switch back 
and forth between a Windows workstation and a CentOS 7.9 workstation. 
The Windows side has dual monitors, both going through the KVM, and 
the Linux side has 3 monitors – two monitors going through the KVM and 
the 3rd monitor connected directly to the PC.


In some cases, when they switch back and forth between the Windows PC 
and the Linux PC, the Linux PC loses its video resolution or 1 or more 
screens goes black. I’m assuming the video loses its sync with the 
monitor. To get out of this state, the user usually does a 
CTRL-ALT-BACKSPACE to restart X, or they go to CTRL-ALT-F2, login from 
the command prompt, and type “startx”. In both cases they lose any 
unsaved work they’ve got open.


The video card (in most cases) is an NVIDIA 620 with the NVIDIA driver 
installed.


I’ve tried adding the following 4 lines to xorg.conf in the device 
section, and it seems to make only the left most monitor stable, but 
the other 2 monitors appear to be disabled, with black screens.


Option "ConnectedMonitor" "DFP-0"

Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"

Option "IgnoreEDID" "false"

Option "UseEDID" "true"

If there's only one GPU in the system then you only need one Device 
section. It's likely that the other two device sections are ignored. 
("Device" here refers to a GPU, not a physical display device).


The "ConnectedMonitor" option takes a comma-separated list of display 
devices that the driver should always consider connected. "CustomEDID" 
uses a semicolon-separated list. So in your case you probably want this:


Option "ConnectedMonitor" "DP-0, DP-2, DP-6"
Option "CustomEDID" "DP-0:/path/to/edid0.bin; DP-2:/path/to/edid1.bin; 
DP-6:/path/to/edid2.bin"


These options are documented in the README: 
https://download.nvidia.com/XFree86/Linux-x86_64/460.56/README/xconfigoptions.html


You shouldn't need the IgnoreEDID or UseEDID options.

The system sees the 3 monitors as DP-2, DP-0, and DP-6 (respectively, 
from left to right). The NVIDIA driver includes a GUI configuration 
app that lets you generate the EDID files on each of the monitors, so 
I created an edid.bin file for each monitor, and adjusted the file 
path for each one in the 2nd line.


There’s 3 device sections so I put the 4 lines above into each section 
(adjusting for edid.bin path and DP-x reference appropriately). That 
gave me one useable screen (the left one) – the middle and right 
monitor were black screens.


I tried putting the 4 lines all in one “Device” section, with the 
appropriate DP-x and edid.bin path, (total of 8 lines) and got the 
same result.



Options in xorg.conf don't combine when you have more than one of the 
same option in a section.


-- Aaron


The PCs are Dell Optiplex 9020s, if that’s relevant.

Any suggestions? Am I on the right track here or should I be trying 
something else?


PG


This message is intended only for the use of the individual or entity 
to which it is addressed and may contain ZETA Associates confidential 
or proprietary information. If you are not the intended recipient, any 
use, dissemination, or distribution of this communication is 
prohibited. If you have received this communication in error, please 
notify the sender and delete all copies.


___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: How to force Xorg/Firefox to use iGPU instead of discrete nvidia card on a Clevo laptop?

2021-03-24 Thread Aaron Plattner
Mesa lumps all of its drivers together under one vendor named "mesa" so 
you are supposed to be able to do something like this:


DRI_PRIME=1 __GLX_VENDOR_LIBRARY_NAME=mesa glxinfo | grep vendor

Unfortunately, Mesa doesn't like something about the GLX fbconfigs the 
NVIDIA driver provides, so it doesn't actually work:


libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request:  GLXBadContext
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  6 (X_GLXIsDirect)
  Serial number of failed request:  62
  Current serial number in output stream:  61

-- Aaron

On 3/24/21 6:44 AM, Dan wrote:

On Wednesday, March 24, 2021 2:33 AM, Sérgio Basto  wrote:


We usually have the opposite problem: we want to enable the discrete nvidia card 
by default. Please try reversing these options [1]

[1]
https://rpmfusion.org/Howto/Optimus#Finer-Grained_Control_of_GLX_.2B-_OpenGL

I see. I tried the following:

~$ __NV_PRIME_RENDER_OFFLOAD=0 __GLX_VENDOR_LIBRARY_NAME=i965 glxinfo|grep 
vendor
server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation

~$ __NV_PRIME_RENDER_OFFLOAD_PROVIDER=modesetting 
__GLX_VENDOR_LIBRARY_NAME=i965 glxinfo |grep vendor
server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation

~$ __NV_PRIME_RENDER_OFFLOAD_PROVIDER=modesetting 
__GLX_VENDOR_LIBRARY_NAME=i965 vainfo
libva info: VA-API version 1.8.0
libva info: Trying to open /usr/xorg/lib64/dri/nvidia_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

No matter what I try, it always defaults to nvidia.

Maybe I'm doing something wrong?

Thanks!



___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: XrandR Verify Active Rate

2021-03-29 Thread Aaron Plattner

On 3/27/21 4:35 AM, re.mcclue wrote:
An XRRScreenResources contains multiple XRRModeInfo and RROutput 
(which, with XRRGetOutputInfo(), effectively means XRROutputInfo). 
Therefore, a screen can have many outputs which, in turn, can have many 
modes. I can restrict myself to only (XRROutputInfo *)->connection == 
RR_Connected outputs; however, this still leaves many possible 
modes. The rate is a property of the mode. Because of this 
many-to-many relationship, how can I determine what the active rate is?

In other words:

XRRScreenResources *res = XRRGetScreenResources(display, default_root_window);
for (int i = 0; i < res->nmode; ++i) {
    XRRModeInfo *mode_info = &res->modes[i];
    // How do you verify this is the active rate?
    double rate = (double)mode_info->dotClock /
                  ((double)mode_info->hTotal * (double)mode_info->vTotal);
}


An Output is active if there is a CRTC driving it. The active mode for a 
CRTC is returned in the RRGetCrtcInfo reply. 
This reply also tells you which outputs are currently being driven by 
that CRTC.


Note that even if an Output is connected (i.e. (XRROutputInfo 
*)->connection == RR_Connected) it's not necessarily actually turned 
on. Conversely, it's possible for a CRTC to be driving an Output with a 
mode even if no monitor is actually connected to that Output. So if you 
just want to figure out which display heads (CRTCs) are active and what 
their refresh rates are, I would recommend ignoring the connection 
state completely and only looking at the states of the CRTCs.
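
A rough fragment of that (error handling omitted; it assumes an open Display 
*dpy, <stdio.h>, and <X11/extensions/Xrandr.h>):

XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));

for (int c = 0; c < res->ncrtc; ++c) {
    XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[c]);

    if (crtc->mode != None) {                    /* this CRTC is active */
        for (int m = 0; m < res->nmode; ++m) {
            const XRRModeInfo *mi = &res->modes[m];
            if (mi->id == crtc->mode && mi->hTotal && mi->vTotal) {
                double rate = (double)mi->dotClock /
                              ((double)mi->hTotal * (double)mi->vTotal);
                /* crtc->noutput / crtc->outputs say which outputs it drives */
                printf("CRTC 0x%lx: %.2f Hz\n", res->crtcs[c], rate);
            }
        }
    }
    XRRFreeCrtcInfo(crtc);
}
XRRFreeScreenResources(res);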


-- Aaron

___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg
Your subscription address: %(user_address)s


Re: XInitThreads multiple times

2021-08-10 Thread Aaron Plattner

On 8/5/21 8:36 AM, Keith Packard wrote:

Dawid Kowalczyk  writes:


Hello,

Is it possible to call XInitThreads multiple times, for example 20
times, and not worry about who calls it first?


XInitThreads isn't re-entrant, so you need to ensure that it isn't
getting invoked by multiple threads in parallel, but it does check to
see if it has been called before, so it is safe to call multiple times
in sequence.


Right, it's not thread-safe. From the man page:

DESCRIPTION
   The XInitThreads function initializes Xlib  support
   for  concurrent threads.  This function must be the
   first Xlib function a multi-threaded program calls,
   and **it must complete before any other Xlib call is
   made**.

(emphasis mine). Looking at the code, it looks like that rule includes 
other calls to XInitThreads and not just *other* Xlib functions.
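
As a minimal sketch, the safe pattern is to make the one XInitThreads() call 
up front, before anything else touches Xlib (illustrative only):

#include <X11/Xlib.h>

int main(void) {
    /* Must be the very first Xlib call, made exactly once, before any
     * thread opens a display or issues other Xlib requests. */
    if (!XInitThreads())
        return 1;

    Display *dpy = XOpenDisplay(NULL);
    /* ... spawn worker threads that share dpy; wrap their Xlib calls in
     *     XLockDisplay(dpy) / XUnlockDisplay(dpy) where needed ... */
    XCloseDisplay(dpy);
    return 0;
}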


-- Aaron