Ganesh:
> Next when the client processes come up, they attach to all these shms
> created by the server process. When the client process calls shmat()
> on the first 3 memory segments, shmat() returns the same addresses it
> returned to the server process (these addresses are stored in the 1st
> shm). But when the client process tries to attach to the 4th segment
> (which was created after the third data shm filled up), shmat()
> returns a totally different address than the one returned to the
> server process.
>
> Since the addresses stored in the array shm correspond to different
> shm addresses, when the client process tries to access those
> addresses, it crashes with SIGSEGV.
It does not look like there is any guarantee of deterministic address
mapping if you pass (void *)0 as the address to this call. The manpage
states:

    o  If shmaddr is equal to (void *) 0 and (shmflg&SHM_SHARE_MMU)
       or (shmflg&SHM_PAGEABLE) is true, then the segment is attached
       to the first available suitably aligned address. When
       (shmflg&SHM_SHARE_MMU) or (shmflg&SHM_PAGEABLE) is set,
       however, the permission given by shmget() determines whether
       the segment is attached for reading or reading and writing.

So it sounds like the "first available suitably aligned address" may
not be the same in all situations.

It sounds like you have two possible ways of solving this problem:

1. Instead of accessing the data in the client based upon the server's
   addresses, use the shmids from the server and build another set of
   mappings based upon the addresses returned to the client from
   shmat().

2. Since you have the addresses at which these segments were mapped in
   the server, you could try to map them into the client at the same
   addresses. This requires that a) each address is properly aligned,
   b) it falls within a valid user range, and c) it is not already
   mapped in your process. If this fails, you'll have to fall back and
   re-map the segment somewhere else, which means you'll probably have
   to do something like #1 in case #2 fails (a rough sketch of this
   follows below).

I'm sure there are other options that are less obvious; however, these
two came to mind immediately.

-j
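For illustration, here is a minimal sketch of #2 with a #1-style
fallback, assuming the server has published, in the first
(bookkeeping) segment, each data segment's shmid plus the address its
own shmat() returned. The helper name attach_segment and its
parameter srv_addr are hypothetical, not from the original post.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    void *attach_segment(int shmid, void *srv_addr)
    {
        /* #2: request the exact address the server got, so pointers
         * stored inside the shared segments stay valid as-is. */
        void *addr = shmat(shmid, srv_addr, 0);
        if (addr != (void *)-1)
            return addr;

        /* That address was unusable in this process (misaligned,
         * outside the valid user range, or already mapped), so fall
         * back to #1: let the kernel pick an address, and record the
         * (server base -> client base) pair for this segment so
         * stored server pointers can be translated before use. */
        addr = shmat(shmid, (void *)0, 0);
        if (addr == (void *)-1) {
            perror("shmat");
            return NULL;
        }
        return addr;
    }

With the fallback in effect, a server-stored pointer p into segment i
would be dereferenced in the client as client_base[i] + (p -
srv_base[i]) rather than used directly.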