Thanks a lot for sharing your inputs, guys.
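Ed's suggestion of varying the clock-sequence bits looks like what I need. For the archives, here is a rough, untested sketch of the general idea (not UUIDGen itself; the class and method names are mine): keep the version-1 timestamp in the most significant bits fixed for a given millisecond and put a counter in the least significant bits, so repeated calls for the same timestamp still yield distinct UUIDs.

import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Untested sketch, not UUIDGen: version-1 UUIDs that share the same
// timestamp (most significant bits) but differ in the low 64 bits.
// The counter replaces the MAC-based node, so uniqueness only holds
// within a single generator instance.
public class SameTimestampUuids {

    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
    private static final long UUID_EPOCH_OFFSET = 0x01b21dd213814000L;

    private static final AtomicLong counter = new AtomicLong();

    // Most significant bits of a version-1 UUID for the given Unix millis.
    private static long timeBits(long unixMillis) {
        long ts = unixMillis * 10000L + UUID_EPOCH_OFFSET; // 100-ns ticks
        long timeLow = ts & 0xFFFFFFFFL;
        long timeMid = (ts >>> 32) & 0xFFFFL;
        long timeHi  = (ts >>> 48) & 0x0FFFL;
        return (timeLow << 32) | (timeMid << 16) | 0x1000L | timeHi; // version 1
    }

    // Repeated calls with the same millis sort together by time but stay distinct.
    public static UUID next(long unixMillis) {
        long seq = counter.getAndIncrement();
        long leastSig = 0x8000000000000000L | (seq & 0x3FFFFFFFFFFFFFFFL); // IETF variant
        return new UUID(timeBits(unixMillis), leastSig);
    }
}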
On Thu, Mar 31, 2011 at 6:47 AM, Drew Kutcharian <d...@venarc.com> wrote:

> Hi Ed,
>
> Cool, I guess we both read/interpreted his post differently and gave two
> valid answers ;)
>
> - Drew
>
> On Mar 30, 2011, at 5:40 PM, Ed Anuff wrote:
>
> > Hey Drew, I'm somewhat familiar with Snowflake, and it's certainly a
> > good option, but my impression was that the main reason to use it is
> > that you find the 128 bits of a UUID overkill, not that it does
> > anything you can't do with UUIDs. The difference in time resolution
> > between UUIDs and Snowflake ids is actually greater than the size of
> > the sequence value that Snowflake uses to differentiate duplicated
> > timestamps, so the easiest thing would be just to round to
> > milliseconds, unless your goal is to save the extra 64 bits per UUID.
> > I was just reading into Roshan's question that he wanted the full time
> > resolution of a UUID and, on top of that, to be able to have a number
> > of duplicate timestamps.
> >
> > On Wed, Mar 30, 2011 at 4:24 PM, Drew Kutcharian <d...@venarc.com> wrote:
> >> Hi Ed,
> >>
> >> There's no need to re-invent the wheel; that's pretty much what Twitter
> >> Snowflake does. The way it works is that it creates a 64-bit long id
> >> formatted as:
> >>
> >> time_bits : data_center_id : machine_id : sequence
> >>
> >> where time_bits is the milliseconds since a custom epoch.
> >>
> >> So, as you can see, you get ids that are unique and ordered by time
> >> down to 1 ms (if two ids are created during the same millisecond, their
> >> ordering is not preserved).
> >>
> >> - Drew
> >>
> >> On Mar 30, 2011, at 4:13 PM, Ed Anuff wrote:
> >>
> >>> If I understand the question, it's not that
> >>> UUIDGen.makeType1UUIDFromHost(InetAddress.getLocalHost()) is returning
> >>> duplicate UUIDs. It should always be giving unique time-based UUIDs
> >>> and has checks to make sure it does. The question was whether it is
> >>> possible to get multiple unique time-based UUIDs with the exact same
> >>> timestamp component, rather than avoiding duplicate timestamps the
> >>> way UUIDGen currently does. The answer is that you could take a look
> >>> at the code for the UUIDGen class and create your own version that
> >>> generates the clock sequence differently -- for example, leaving a
> >>> certain number of low-order bits of the clock sequence empty and then
> >>> incrementing those when duplicate timestamps are generated, rather
> >>> than incrementing the timestamp the way UUIDGen currently does.
> >>>
> >>> On Wed, Mar 30, 2011 at 10:08 AM, Drew Kutcharian <d...@venarc.com> wrote:
> >>>> Hi Roshan,
> >>>> You probably want to look at Twitter's
> >>>> Snowflake: https://github.com/twitter/snowflake
> >>>> There's also another Java variant: https://github.com/earnstone/eid
> >>>> - Drew
> >>>>
> >>>> On Mar 30, 2011, at 6:08 AM, Roshan Dawrani wrote:
> >>>>
> >>>> Hi,
> >>>> Is there any way I can get multiple unique time UUIDs for the same
> >>>> timestamp value - I mean, UUIDs that are the same in their time part
> >>>> (most significant bits) but differ in their least significant bits?
> >>>> The least significant bits added by
> >>>> me.prettyprint.cassandra.utils.TimeUUIDUtils seem to be a fixed value
> >>>> based on the mac/ip address, which makes sure that I get the same UUID
> >>>> for a timestamp value every time I ask.
> >>>> I need "(timestamp): <some value>" kind of columns that need to be
> >>>> sorted by time, and I wanted to use TimeUUID to get the column sorting
> >>>> that comes out of the box, but the problem is that I can get multiple
> >>>> values for the same timestamp.
> >>>> So, I am looking for some way where the time portion is the same but
> >>>> the other UUID half is different, so that I can safely store "1 time
> >>>> UUID: 1 value".
> >>>> Any help there is appreciated.
> >>>> --
> >>>> Roshan
> >>>> Blog: http://roshandawrani.wordpress.com/
> >>>> Twitter: @roshandawrani
> >>>> Skype: roshandawrani
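P.S. For my own notes, this is how I read the Snowflake-style layout Drew describes above, as a bit-packing sketch. The field widths (41/5/5/12) and the custom-epoch value here are my assumptions, not taken from the Snowflake source.

// Rough sketch of a Snowflake-style 64-bit id:
// time_bits : data_center_id : machine_id : sequence
public class SnowflakeLayoutSketch {

    // Example custom epoch in millis; Snowflake uses its own value.
    private static final long CUSTOM_EPOCH = 1288834974657L;

    static long makeId(long unixMillis, long dataCenterId, long machineId, long sequence) {
        long time = unixMillis - CUSTOM_EPOCH;     // ~41 bits of millis since the custom epoch
        return (time << 22)
             | ((dataCenterId & 0x1FL) << 17)      // 5-bit data_center_id
             | ((machineId & 0x1FL) << 12)         // 5-bit machine_id
             | (sequence & 0xFFFL);                // 12-bit sequence
    }
}

Since the time bits sit highest, numeric ordering of the ids follows time down to the millisecond, as Drew says, with ties within the same millisecond decided only by the sequence field.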