Hi Ganesh,
I have done the same kind of thing you are trying to do with shmget/shmat, but I used mmap. With mmap, I needed to specify the address (the first arg to mmap(2)) together with the MAP_FIXED flag. Have you tried specifying an address instead of "(void *)0"?
(I admit, using the SHM_SHARE_MMU flag seems like it should end up giving the same address, but that is only an assumption.)
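
Something like the following is what I mean (just a sketch; the file name, fixed address, and size are placeholders, and MAP_FIXED will replace whatever mapping is already at that address, so it has to be chosen carefully):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define FIXED_ADDR ((void *)0x80000000)   /* placeholder: page-aligned, unused in all processes */
#define SEG_SIZE   (1024 * 1024)          /* placeholder segment size */

/* Map a file-backed segment at a caller-chosen address so every
 * process that maps it sees the data at the same location. */
int fd = open("/tmp/mycache", O_RDWR | O_CREAT, 0666);   /* placeholder path */
if (fd == -1 || ftruncate(fd, SEG_SIZE) == -1)
    perror("open/ftruncate");
void *p = mmap(FIXED_ADDR, SEG_SIZE, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_FIXED, fd, 0);
if (p == MAP_FAILED)
    perror("mmap");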

max

Borse, Ganesh wrote:
No, I am not specifying the address (the 2nd param) in my shmat calls; I'm passing "(void *)0". It is the address returned by shmat, after attaching the segment, that differs.

My code is as below:

/* Create the segment; if it already exists, attach to the existing one.
 * shmget() returns -1 on failure, so check that before looking at errno. */
int id = shmget(key, size, 0777 | IPC_CREAT | IPC_EXCL);
if (id == -1 && errno == EEXIST) {
  id = shmget(key, size, 0777 | IPC_CREAT);
}
if (id == -1) {
  printf("shm:shmget:failed: errno:%d\n", errno);
  return errno;
}

/* Attach with ISM. Permissions belong to shmget, not shmat, and
 * shmat() signals failure by returning (void *)-1, not NULL. */
suht->_dataMemSegments[idx]._dataMemory =
    (pSUHT_ITEM)shmat(id, (void *)0, SHM_SHARE_MMU);
if ((void *)-1 == (void *)suht->_dataMemSegments[idx]._dataMemory) {
  printf("shm:_dataMemory:failed: errno:%d\n", errno);
  return errno;
}
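
(If I were to ask for a specific address instead, I guess the attach would look roughly like this; saved_addr is hypothetical here, e.g. read back from wherever the server stored its attach address:)

/* Sketch only: re-attach a segment at a previously recorded address.
 * Without SHM_RND the address must be SHMLBA-aligned, and shmat()
 * returns (void *)-1 and sets errno (e.g. EINVAL) if it cannot use it. */
void *attach_at(int id, void *saved_addr)
{
    void *p = shmat(id, saved_addr, SHM_SHARE_MMU);
    if (p == (void *)-1)
        perror("shmat");
    return p;
}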

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: 04 April 2007 16:49
To: Borse, Ganesh
Cc: 'perf-discuss@opensolaris.org'
Subject: Re: [perf-discuss] Request for guidance on shmat() call attaching at different address

Hi Ganesh,
Are you specifying an address in your shmat calls? Can you post code segments doing shmget/shmat?
max

Borse, Ganesh wrote:
Hi,
I am facing a problem where a shared memory segment gets attached at a different address after the processes using it go down and the attach count drops to zero.

This causes a problem the next time the processes come up and try to access the addresses stored in the shared memory segment.

The scenario in brief:
I have 3 processes using multiple shared memory segments. The shared memory segments (shm) are also used in a hierarchical fashion.

One shm is used as a high-level data structure to hold pointers to the other shms. The second shm is an array of pointers to structures (data blocks) that are allocated in the third shm. For the other 2 processes to access these blocks, their addresses are stored in this second shm (the array).

The third shm actually holds the data retrieved by the server process from the database; it acts as a cache. When the third shm is completely used, the server process creates a new shm and starts storing data into it. Once all data has been stored, the server process goes down, without removing any shm.
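
Roughly, the layout looks like this (the names and sizes here are only illustrative, not my actual declarations):

#define MAX_BLOCKS    1024   /* illustrative */
#define MAX_DATA_SEGS 16     /* illustrative */

typedef struct {                 /* third shm(s): the data blocks / cache */
    char payload[256];
} DataBlock;

typedef struct {                 /* second shm: array of raw pointers into the data shms */
    DataBlock *blocks[MAX_BLOCKS];
} PointerArraySeg;

typedef struct {                 /* first shm: top-level structure pointing at the others */
    PointerArraySeg *ptr_array;
    void *data_seg_addr[MAX_DATA_SEGS];  /* attach addresses of the data shms */
} DirectorySeg;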

Next, when the client processes come up, they attach to all the shms created by the server process. When a client process calls "shmat" on the first 3 memory segments, shmat returns the same address it returned to the server process (these addresses are stored in the 1st shm).

But when the client process tries to attach to the 4th segment (which was created after the third data shm filled up), shmat returns a totally different address from the one returned to the server process.

Since the addresses stored in the array shm correspond to the old attach address, the client process crashes with SIGSEGV when it tries to access them.
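
(As an aside, I realize one way to avoid depending on the attach address at all would be to store segment-relative offsets in the array shm instead of raw pointers, and rebuild the pointers after each attach; a rough sketch, not my current code:)

typedef struct {
    int    seg_idx;   /* which data segment the block lives in */
    size_t offset;    /* byte offset of the block inside that segment */
} BlockRef;

/* Rebuild a usable pointer from whatever address the segment got this time. */
static void *resolve(void *seg_base[], BlockRef ref)
{
    return (char *)seg_base[ref.seg_idx] + ref.offset;
}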

I am using the SHM_SHARE_MMU flag with shmat to use ISM and to get the same address.

Any clue or hint why shmat returns a different address only for the 4th (last) segment? Is this normal behavior?
Could I be doing something wrong while creating this segment?

Please help.

Thanks and Regards,
GaneshB
