Re: Diagnosing first vs subsequent performance

2016-01-21 Thread Lloyd Brown
Just so I know, is there an xorg.conf equivalent of the "-sharevts" command-line parameter? I see (but haven't tested) a documented "DontVTSwitch" option, which I assume is the equivalent of "-novtswitch". But I haven't seen anything for "-sharevts". It's not a huge deal either way; just a personal preference
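For reference, a minimal sketch of the xorg.conf side, using only the documented "DontVTSwitch" ServerFlags option; -sharevts does not appear to have an xorg.conf equivalent, so it would stay on the command line. The config file path is an assumption:

  # ServerFlags section added to the per-GPU config, e.g. /etc/X11/xorg.conf.gpu0
  Section "ServerFlags"
      Option "DontVTSwitch" "true"
  EndSection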

Re: Diagnosing first vs subsequent performance

2016-01-21 Thread Solerman Kaplon
On 2016-01-20 20:09, Aaron Plattner wrote: You can work around this problem somewhat by using the -sharevts and -novtswitch options to make the X servers be active simultaneously, but please be aware that this configuration is not officially supported, so you might run into strange and unexpected

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Lloyd Brown
Wow, Aaron. I should've known that you'd have the answer. Thanks, btw, for answering my earlier, semi-related questions on the nvidia forum: https://devtalk.nvidia.com/default/topic/840157/non-root-xorg-with-nvidia-driver/ In short, adding the sharevts and novtswitch to the command-line for the

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Aaron Plattner
My guess is that each X server you start is switching to its own VT. Since you're running Xorg by itself, there are initially no clients connected. When you run an application such as glxinfo that exits immediately, or kill your copy of glxgears, it causes the server to reset, which makes it initi
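A hedged sketch of the workaround described above, assuming one config file per GPU and four displays :0 through :3 (the file names are assumptions, and per Aaron this combination is not officially supported):

  # start one X server per GPU, all sharing the same VT and never switching away
  Xorg :0 -config /etc/X11/xorg.conf.gpu0 -sharevts -novtswitch &
  Xorg :1 -config /etc/X11/xorg.conf.gpu1 -sharevts -novtswitch &
  # ...and likewise for :2 and :3 on the remaining K80 GPUs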

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Thomas Lübking
On Wednesday, 20 January 2016 21:03:35 CEST, Lloyd Brown wrote: [lbrown@m8g-1-8 ~]$ DISPLAY=:0.0 glxgears Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate. try DISPLAY=:0.0 __GL_GSYNC_ALLOWED=0 __GL_SYNC_TO_VBLANK=0 glxgears
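Spelled out as a complete command; the __GL_* variables are NVIDIA-driver specific, so this assumes the NVIDIA GLX libraries are the ones actually in use:

  # disable G-SYNC and vblank syncing so glxgears reports an uncapped frame rate
  DISPLAY=:0.0 __GL_GSYNC_ALLOWED=0 __GL_SYNC_TO_VBLANK=0 glxgears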

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Ken Moffat
On Wed, Jan 20, 2016 at 01:03:35PM -0700, Lloyd Brown wrote: > Something else very odd, as well: > > I was just running glxgears (good performance) on one display, and then > when I ran glxinfo on a second display, the glxgears performance dropped > significantly, and glxgears disappeared from the

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Lloyd Brown
Something else very odd, as well: I was just running glxgears (good performance) on one display, and then when I ran glxinfo on a second display, the glxgears performance dropped significantly, and glxgears disappeared from the output of nvidia-smi. Here's some example output from the glxgears; y
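One way to watch whether glxgears stays resident on the GPU while a second client connects; a minimal sketch assuming nvidia-smi is on the PATH:

  # refresh the GPU process list every second and watch for glxgears dropping out
  watch -n 1 nvidia-smi
  # or check the process table once
  nvidia-smi | grep glxgears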

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Lloyd Brown
It's true. A glxinfo first is enough to get things working, so that's a workaround. Not ideal, but something. As for dmesg, the only new output after a slow instance of glxgears is a line like this: > vgaarb: this pci device is not a vga device But since the Tesla is not a VGA device, tha
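A sketch of that warm-up workaround, assuming the same four displays :0 through :3 used in the test loop later in the thread:

  # touch each X screen once with glxinfo so later GL clients start at full speed
  for i in {0..3}; do DISPLAY=:${i}.0 glxinfo > /dev/null; done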

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Thomas Lübking
Check dmesg, notably for NVRM, right after the failed GL call. I assume any initial GL call will do, i.e. running glxinfo will lead to a first instance of glxgears on the GPU? Cheers, Thomas
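For completeness, the kind of check being asked for here; a minimal sketch:

  # look for NVIDIA kernel module (NVRM) and VGA arbiter messages after the slow GL call
  dmesg | grep -iE 'NVRM|vgaarb' | tail -n 20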

Re: Diagnosing first vs subsequent performance

2016-01-20 Thread Ilya Anfimov
On Tue, Jan 19, 2016 at 12:54:15PM -0700, Lloyd Brown wrote: > Sure. I've generated them using something like this: > > > $ for i in {0..3}; do echo "Testing display $i"; for j in {1..2}; do > > echo "Instance $j"; DISPLAY=:${i}.0 glxinfo > > > /tmp/glxinfo.display${i}.instance${j}; echo "sleepin

Re: Diagnosing first vs subsequent performance

2016-01-19 Thread Lloyd Brown
Well, in short, it didn't quite work as hoped. I generated a minimal config (example attached), and so far, it seemed to act the same as before. Slow on the first one, and faster on 2nd and subsequent. Now that's a mixed blessing. It means that most of the file wasn't actually necessary, but it

Re: Diagnosing first vs subsequent performance

2016-01-19 Thread Lloyd Brown
Sure. I've generated them using something like this: > $ for i in {0..3}; do echo "Testing display $i"; for j in {1..2}; do > echo "Instance $j"; DISPLAY=:${i}.0 glxinfo > > /tmp/glxinfo.display${i}.instance${j}; echo "sleeping 30s"; sleep 30; > echo "---"; done; echo "sleeping 5s"; sleep 5; echo
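The same loop, reflowed for readability; the trailing echo is truncated in the archive, so the final separator string is an assumption:

  for i in {0..3}; do
    echo "Testing display $i"
    for j in {1..2}; do
      echo "Instance $j"
      DISPLAY=:${i}.0 glxinfo > /tmp/glxinfo.display${i}.instance${j}
      echo "sleeping 30s"; sleep 30
      echo "---"
    done
    echo "sleeping 5s"; sleep 5
    echo "==="   # assumed; the original one-liner is cut off at this echo
  done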

Re: Diagnosing first vs subsequent performance

2016-01-19 Thread Ilya Anfimov
On Tue, Jan 19, 2016 at 11:54:43AM -0700, Lloyd Brown wrote: > I will try, but honestly, I'm not certain if it will instantiate the > server or not. > > I was mostly following the recommendations from the "Setting up the X > Server for Headless Operation" (pg 15) of this document: > http://www.nv

Re: Diagnosing first vs subsequent performance

2016-01-19 Thread Lloyd Brown
I will try, but honestly, I'm not certain if it will instantiate the server or not. I was mostly following the recommendations from the "Setting up the X Server for Headless Operation" (pg 15) of this document: http://www.nvidia.com/content/PDF/remote-viz-tesla-gpus.pdf On 01/19/2016 11:19 AM,
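The headless setup in that document is roughly along these lines; a hedged sketch, since the exact flags recommended on page 15 may differ:

  # query GPU bus IDs, then generate an xorg.conf with one X screen per GPU
  # and no attached display
  nvidia-xconfig --query-gpu-info
  nvidia-xconfig -a --use-display-device=none --virtual=1280x1024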

Re: Diagnosing first vs subsequent performance

2016-01-19 Thread Christopher Barry
On Tue, 19 Jan 2016 09:03:49 -0700 Lloyd Brown wrote: >Hi, all. > >I hope this isn't too dumb of a question, but I'm having trouble >finding anything on it so far. Not sure if my google-fu is just not >up to the task today, or if it's genuinely an obscure problem. > >I'm in the middle of settin

Diagnosing first vs subsequent performance

2016-01-19 Thread Lloyd Brown
Hi, all. I hope this isn't too dumb of a question, but I'm having trouble finding anything on it so far. Not sure if my google-fu is just not up to the task today, or if it's genuinely an obscure problem. I'm in the middle of setting up an HPC node with 2 NVIDIA Tesla K80s (4 total GPUs), for s
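To make the symptom concrete, the setup amounts to one X display per K80 GPU, with the first GL client on each display running slowly and later ones at full speed; a condensed illustration (config path and display number are assumptions):

  Xorg :0 -config /etc/X11/xorg.conf.gpu0 &
  DISPLAY=:0.0 glxgears    # first instance: low frame rate
  # quit glxgears, then run it again
  DISPLAY=:0.0 glxgears    # subsequent instance: full frame rate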