Just out of curiosity: is there a function that translates major/minor opcodes into something readable?

re,
wh
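There is no single Xlib call that names an opcode, as far as I know. The error name itself comes from XGetErrorText(); for the major opcode you can ask the server which extension owns it, since XQueryExtension() reports each extension's base opcode. A minimal sketch, assuming plain Xlib (the helper name here is made up):

/* Sketch: map a major opcode back to the extension that owns it.
   Core requests (opcodes below 128) are listed in <X11/Xproto.h>;
   minor opcodes are defined per extension by its protocol headers
   (for GLX, <GL/glxproto.h>). */
#include <X11/Xlib.h>
#include <stdio.h>

static void name_major_opcode(Display *dpy, int target)
{
    int n;
    char **exts = XListExtensions(dpy, &n);

    for (int i = 0; i < n; i++) {
        int major, first_event, first_error;
        if (XQueryExtension(dpy, exts[i], &major, &first_event, &first_error)
                && major == target)
            printf("opcode %d belongs to \"%s\"\n", target, exts[i]);
    }
    XFreeExtensionList(exts);
}

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    name_major_opcode(dpy, 135);   /* 135 is GLX in the error quoted below */
    XCloseDisplay(dpy);
    return 0;
}

From a shell, "xdpyinfo -queryExtensions" prints the same opcode-to-name mapping without writing any code.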
On 30.03.2016 20:48, Ingo Bürk wrote:
> Hi Lloyd,
>
> Adam already decoded the opcode for you. Just a quick Google search of
> the request name + "BadAlloc" gives at least a few results. It might be
> worth checking those out. I'm not familiar with GLX, unfortunately.
>
>
> Regards
> Ingo
>
> On 03/30/2016 08:38 PM, Lloyd Brown wrote:
>> Ingo,
>>
>> Thank you for this.
>>
>> Just for clarification, are we talking about system RAM or the video
>> card's RAM?
>>
>> The reason I ask is this: since we're an HPC lab, we do limit system
>> memory via memory cgroups, based on what the user's job requested. But
>> since seeing your email, I've gone as high as 64GB in my request,
>> verified that the cgroup reflected that, and the problem still
>> occurred. If we're talking about the video card's RAM, we don't
>> artificially limit it at all, and the card in question is a Tesla K80,
>> which has 2 GPUs and 12GB of video RAM per GPU.
>>
>> I wonder if there's some other limit going on that I'm not aware of.
>>
>> Maybe it makes more sense to contact the Paraview software community at
>> this point. They may have a better idea of where this could be going
>> wrong.
>>
>> Thanks for the info, though. It was exactly the sort of thing I was
>> hoping for.
>>
>> Lloyd
>>
>> On 03/30/2016 12:18 PM, Ingo Bürk wrote:
>>> Hi Lloyd,
>>>
>>> See here: http://www.x.org/wiki/Development/Documentation/Protocol/OpCodes/
>>>
>>> In your case you are trying to allocate way too much memory. This can
>>> happen, for example, if you accidentally try to create enormously
>>> large pixmaps. Of course, there are many things that can cause this.
>>> Decoding the opcode will help you debug it.
>>>
>>>
>>> Regards
>>> Ingo
>>>
>>> On 03/30/2016 06:03 PM, Lloyd Brown wrote:
>>>> Can anyone help me understand where the error messages, especially
>>>> the major and minor opcodes, come from in an error like this one?
>>>> Are these defined by Xorg, by the driver (Nvidia, in this case), or
>>>> somewhere else entirely?
>>>>
>>>>> X Error of failed request: BadAlloc (insufficient resources for
>>>>> operation)
>>>>> Major opcode of failed request: 135 (GLX)
>>>>> Minor opcode of failed request: 34 ()
>>>>> Serial number of failed request: 26
>>>>> Current serial number in output stream: 27
>>>>
>>>> So, here's the background. I'm launching Xorg to manage the GLX
>>>> context for some processing applications. When I use things like
>>>> glxgears, glxspheres64 (from the VirtualGL project), glxinfo, or
>>>> glmark2, everything works well. But when I use the actual user
>>>> application (pvserver, part of Paraview), it gives me this error
>>>> shortly after I connect my paraview frontend to the pvserver backend.
>>>>
>>>> Running pvserver inside gdb, with a "break exit", lets me backtrace
>>>> it, but all it really tells me is that the error occurs while the
>>>> application is trying to establish its context.
>>>>
>>>> I can continue to dink around with it, but if anyone can at least
>>>> point me in the right direction, that would be helpful.
>>>>
>>>> Thanks,
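For catching the failing request in-process rather than post-mortem, one possible sketch, assuming the application (or an LD_PRELOAD shim) can install its own handler: XSetErrorHandler() plus XSynchronize() makes the error fire at the offending call, so a gdb breakpoint on the handler yields a backtrace at the real culprit instead of at exit(). The handler name below is hypothetical:

/* Sketch: a verbose X error handler for debugging BadAlloc-style
   failures. XSynchronize() makes Xlib round-trip after every request,
   so the handler fires at the call that actually failed. Slow; for
   debugging only. */
#include <X11/Xlib.h>
#include <stdio.h>

static int verbose_x_error(Display *dpy, XErrorEvent *ev)
{
    char text[256];
    XGetErrorText(dpy, ev->error_code, text, sizeof(text));
    fprintf(stderr, "X error: %s (major=%d minor=%d serial=%lu)\n",
            text, ev->request_code, ev->minor_code, ev->serial);
    return 0;   /* returning keeps the client alive; breakpoint here */
}

/* Call once after XOpenDisplay(), early in the app's init path: */
void install_debug_handler(Display *dpy)
{
    XSetErrorHandler(verbose_x_error);
    XSynchronize(dpy, True);
}

With this installed, "break verbose_x_error" in gdb stops at the glX call that actually failed, instead of wherever the asynchronous error happens to be reported.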
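To illustrate Ingo's point about oversized pixmaps, a hypothetical demo (not from the thread) that should draw a BadAlloc from most servers; the dimensions are made up:

/* Hypothetical demo: request a pixmap far larger than the server can
   allocate. With Xlib's default error handler this prints the familiar
   "X Error of failed request: BadAlloc" message and exits. */
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    /* 40000 x 40000 at 32 bits per pixel is roughly 6 GB server-side. */
    XCreatePixmap(dpy, DefaultRootWindow(dpy), 40000, 40000,
                  DefaultDepth(dpy, DefaultScreen(dpy)));
    XSync(dpy, False);   /* force the round trip; errors arrive asynchronously */
    XCloseDisplay(dpy);
    return 0;
}

The asynchronous reporting is also why the two serial numbers in the original report differ (26 vs. 27): the error is delivered after the output stream has already moved on.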