Re: Still pondering working with hashs

2010-05-11 Thread Harry Putnam
Shawn H Corey writes: Oh nice... thanks. Hope I can get to try this out later tonight... I have to go out for a while and can't get to it right now though. The main `for loop' near the end, and really all of it, looks to be highly portable, like the inversion code was. I think that little in

Re: Still pondering working with hashs

2010-05-11 Thread Harry Putnam
Jim Gibson writes: Harry wrote: >> Shawn, hoping to pester you once more about this topic. Jim G responded: > It is not fair to single out Shawn for help. Just post your question > and hope for a response. Just a manner of speaking, but you're right, it does appear to be a little off the wall.

Re: Still pondering working with hashs

2010-05-11 Thread Shawn H Corey
On 10-05-11 04:52 PM, Harry Putnam wrote: Some selective output first: [...] d1 ./dir1/etc/images/gnus/exit-summ.xpm d2(1) ./dir2/etc/images/gnus/exit-summ.xpm d1 ./dir1/etc/images/gnus/reply.xpm d2(1) ./dir2/etc/images/mail/reply.xpm d2(2) ./di
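
The grouped output quoted above (one d1 path, then numbered d2 paths for the same filename) could be produced roughly as below. A sketch only: the %inv1/%inv2 hashes, their array-of-paths shape, and the sample paths are assumptions, not code from the thread.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative inverted hashes: filename => [ full paths where it occurs ].
my %inv1 = ( 'reply.xpm' => [ './dir1/etc/images/gnus/reply.xpm' ] );
my %inv2 = ( 'reply.xpm' => [ './dir2/etc/images/mail/reply.xpm',
                              './dir2/etc/images/gnus/reply.xpm' ] );

# For every filename found in both trees, print the d1 path(s) and
# each d2 path with a running number, as in the output above.
for my $name ( sort keys %inv1 ) {
    next unless exists $inv2{$name};
    print  "d1    $_\n" for @{ $inv1{$name} };
    my $n = 0;
    printf "d2(%d) %s\n", ++$n, $_ for @{ $inv2{$name} };
}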

Re: Still pondering working with hashs

2010-05-11 Thread Jim Gibson
On 5/11/10 Tue May 11, 2010 1:52 PM, "Harry Putnam" scribbled: > Shawn H Corey writes: > >> Harry Putnam wrote: >>> But, is there an easier way? >> >> Invert both hashes and find the keys in both inverses. > > Shawn, hoping to pester you once more about this topic. It is not fair to single

Re: Still pondering working with hashs

2010-05-11 Thread Harry Putnam
Shawn H Corey writes: > Harry Putnam wrote: >> But, is there an easier way? > > Invert both hashes and find the keys in both inverses. Shawn, hoping to pester you once more about this topic. First: the hashes involved are built like this (using File::Find nomenclature): (NOT CODE... Just descri
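
The preview cuts off before the description, but hashes of that shape (key = full path, value = bare filename) might be built with File::Find roughly as below. A sketch only: the ./dir1 and ./dir2 roots are placeholders, not Harry's actual trees or code.

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my ( %h1, %h2 );

# Key: the full path as reported by File::Find; value: the bare filename.
find( sub { $h1{$File::Find::name} = $_ if -f }, './dir1' );
find( sub { $h2{$File::Find::name} = $_ if -f }, './dir2' );

print scalar( keys %h1 ), " files under ./dir1, ",
      scalar( keys %h2 ), " files under ./dir2\n";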

Re: Still pondering working with hashs

2010-05-04 Thread Harry Putnam
"Dr.Ruud" writes: > Harry Putnam wrote: > >> But could I have started there not a chance. So please engage >> some shred of common sense before routinely posting constant karping >> and even seriously wrong headed advice like this. > > You are very wrong here. Just put in a sentence what yo

Re: Still pondering working with hashs

2010-05-04 Thread Dermot
On 3 May 2010 19:47, Dr.Ruud wrote: > Harry Putnam wrote: > >> Yes, files that exist on multiple paths, but there are also >> many matched names that are not actually the same file. > > Then use md5, or a similar tool. Seconded. If you want to find duplicate files you will need to use MD5 or a S
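
A minimal sketch of the MD5 approach with Digest::MD5: group files by content digest, so true duplicates show up regardless of name or path. The ./dir1 and ./dir2 roots are placeholders.

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Digest::MD5;

my %by_digest;

# Map each file's MD5 digest to the list of paths that have that content.
find( sub {
    return unless -f;
    open my $fh, '<', $_ or return;
    binmode $fh;
    push @{ $by_digest{ Digest::MD5->new->addfile($fh)->hexdigest } },
         $File::Find::name;
}, './dir1', './dir2' );

# Any digest with more than one path is a set of genuine duplicates.
for my $digest ( sort keys %by_digest ) {
    my @paths = @{ $by_digest{$digest} };
    print "duplicates:\n  ", join( "\n  ", @paths ), "\n" if @paths > 1;
}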

Re: Still pondering working with hashs

2010-05-04 Thread Philip Potter
On 4 May 2010 15:19, Harry Putnam wrote: > Philip Potter writes: >> haven't explained *why* you are doing this comparison in this thread. >> [You might have done elsewhere, but I don't read every thread.] > Uri actually did (most of) it for me at one point in a recent post on > this thread.  Mess

Re: Still pondering working with hashs

2010-05-04 Thread Harry Putnam
Philip Potter writes: > You have to start with *some* goal. Even in agile, you start with > stories to work out in what direction you are headed. You formalise > your requirements into tests and then you start coding. Yes, you > revise your stories, requirements and tests as you learn more about

Re: Still pondering working with hashs

2010-05-04 Thread Harry Putnam
"Bob McConnell" writes: > I would expect his short range goals to be adjusted as he learns what is > possible and what it takes to accomplish it. That does require some Thank you sir for a decent summary of how this has gone so far. -- To unsubscribe, e-mail: beginners-unsubscr...@perl.org F

Re: Still pondering working with hashs

2010-05-04 Thread Harry Putnam
Philip Potter writes: > haven't explained *why* you are doing this comparison in this thread. > [You might have done elsewhere, but I don't read every thread.] Uri actually did (most of) it for me at one point in a recent post on this thread. Message-ID: <87mxwgbvyo@quad.sysarch.com> on

Re: Still pondering working with hashs

2010-05-04 Thread Dr.Ruud
Harry Putnam wrote: But could I have started there? Not a chance. So please engage some shred of common sense before routinely posting constant carping and even seriously wrongheaded advice like this. You are very wrong here. Just put in a sentence what you really try to achieve. Not th

Re: Still pondering working with hashs

2010-05-04 Thread Dr.Ruud
Harry Putnam wrote: Yes, files that exist on multiple paths, but there are also many matched names that are not actually the same file. Then use md5, or a similar tool. -- Ruud

RE: Still pondering working with hashs

2010-05-04 Thread Bob McConnell
From: Philip Potter > On 4 May 2010 13:45, Bob McConnell wrote: >> From: Uri Guttman >>> "HP" == Harry Putnam writes: >>> HP> "Uri Guttman" writes: >>> >> nope. been doing this for 35 years and it is solid advice. you can't do >>> >> a proper program unless you have a proper

Re: Still pondering working with hashs

2010-05-04 Thread Akhthar Parvez K
On Tuesday 04 May 2010, Philip Potter wrote: > On 3 May 2010 22:06, Harry Putnam wrote: > > That is the kind of `always true' thing one might say.  I forgot what > > the term is but it means it's fairly meaningless and mainly sounds > > good. But nonetheless true. > > A tautology? No, it's not.

Re: Still pondering working with hashs

2010-05-04 Thread Philip Potter
On 4 May 2010 13:45, Bob McConnell wrote: > From: Uri Guttman >>> "HP" == Harry Putnam writes: >> HP> "Uri Guttman" writes: >> >> nope. been doing this for 35 years and it is solid advice. you can't do >> >> a proper program unless you have a proper goal which is what the

RE: Still pondering working with hashs

2010-05-04 Thread Bob McConnell
From: Uri Guttman >> "HP" == Harry Putnam writes: > > HP> "Uri Guttman" writes: > >> nope. been doing this for 35 years and it is solid advice. you can't do > >> a proper program unless you have a proper goal which is what the > >> specification is. > > HP> Some of it looks suspi

Re: Still pondering working with hashs

2010-05-04 Thread Philip Potter
On 3 May 2010 22:06, Harry Putnam wrote: > Philip Potter writes: > > [...] > > Both you and Uri are right to a degree.  I have to respect Uri's > experience, but in fact I have presented goals at every step in this > thread.  Uri just doesn't want to recognize them. > > 1) how to find what files

Re: Still pondering working with hashs

2010-05-03 Thread Uri Guttman
> "HP" == Harry Putnam writes: HP> "Uri Guttman" writes: >> nope. been doing this for 35 years and it is solid advice. you can't do >> a proper program unless you have a proper goal which is what the >> specification is. HP> Thank you Uncle Uri. I guess I just don't yet know how

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
"Uri Guttman" writes: > nope. been doing this for 35 years and it is solid advice. you can't do > a proper program unless you have a proper goal which is what the > specification is. Thank you Uncle Uri. I guess I just don't yet know how to make use all you present as help. Some of it looks s

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
Philip Potter writes: [...] Both you and Uri are right to a degree. I have to respect Uri's experience, but in fact I have presented goals at every step in this thread. Uri just doesn't want to recognize them. 1) how to find what files are in one tree but not the other I've received a few

Re: Still pondering working with hashs

2010-05-03 Thread Philip Potter
On 3 May 2010 20:56, Uri Guttman wrote: >> "HP" == Harry Putnam writes: >  HP> "Uri Guttman" writes: >  >> this is what has been bothering me here. you haven't yet spit out a >  >> proper problem specification. as i kept saying comparing dir trees is >  >> tricky and you kept showing incompl

Re: Still pondering working with hashs

2010-05-03 Thread Uri Guttman
> "HP" == Harry Putnam writes: HP> "Uri Guttman" writes: >> this is what has been bothering me here. you haven't yet spit out a >> proper problem specification. as i kept saying comparing dir trees is >> tricky and you kept showing incomplete examples which now all seem to be >> wr

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
Jim Gibson writes: > The usual purpose of using Data::Dumper (DD) is to print out a complex data What an excellent synopsis of how DD fits into stuff like this. Thanks a lot... I was on the verge of starting to really pound away trying to learn DD and how to use it. I mean making it the guts

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
"Uri Guttman" writes: > this is what has been bothering me here. you haven't yet spit out a > proper problem specification. as i kept saying comparing dir trees is > tricky and you kept showing incomplete examples which now all seem to be > wrong as you just made a major change in the 'spec'. now

Re: Still pondering working with hashs

2010-05-03 Thread Uri Guttman
> "HP" == Harry Putnam writes: HP> Its actually a little more complex than that... since some of the HP> matches are the right file but do have slight differences that will HP> show in the sizes. Not so many like that... but I have seen a few so HP> far. this is what has been bothe

Re: Still pondering working with hashs

2010-05-03 Thread Jim Gibson
On 5/3/10 Mon May 3, 2010 9:35 AM, "Harry Putnam" scribbled: [long problem description snipped] > I'm doing that without involving Data::Dumper. > > Do you think it would be better done using Data::Dumper? > > First I'd have to learn how to use Data::Dumper The usual purpose of using Data:
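
A minimal sketch of the usage Jim describes: set a couple of Data::Dumper options once, then dump a structure to see exactly what it holds. The %h1 sample data is illustrative only.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

$Data::Dumper::Sortkeys = 1;   # stable, readable key order
$Data::Dumper::Indent   = 1;   # compact indentation

my %h1 = ( './b/f1' => 'f1', './b/c/fa' => 'fa' );

# Print the whole structure, labelled $h1 in the output.
print Data::Dumper->Dump( [ \%h1 ], [ 'h1' ] );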

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
"Dr.Ruud" writes: [...] snipped poorly written example > I think you want to find filenames that exist in multiple paths. As John K. has noted... my example was misleading. If you were to prepend a different root directory to each list of filenames in my example it would be much more like what

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
Shawn H Corey writes: [...] > If you want to find those in one but not the other, change the if: > > for my $name ( keys %inverse_h1 ){ > if( exists $inverse_h2{$name} ){ > # print "$name exists in both hashes:\n", > # Data::Dumper->Dump( [ $inverse_h1{$name}, > $inverse_h2{$name
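
A sketch of the change Shawn describes: walk one inverted hash and skip any name that also exists in the other, leaving only the filenames found in the first tree alone. The shape and contents of %inverse_h1/%inverse_h2 are assumptions based on the earlier inversion, not his exact snippet.

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative inverted hashes: filename => { full path => 1, ... }.
my %inverse_h1 = (
    'f1' => { './dir1/b/f1'   => 1 },
    'fa' => { './dir1/b/c/fa' => 1 },
);
my %inverse_h2 = (
    'fa' => { './dir2/b/x/fa' => 1 },
);

# Names present in the first hash but not the second.
for my $name ( sort keys %inverse_h1 ) {
    next if exists $inverse_h2{$name};
    print "$name exists only in the first hash\n";
}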

Re: Still pondering working with hashs

2010-05-03 Thread Harry Putnam
"John W. Krahn" writes: > Harry Putnam wrote: >> >> my %h1 = ( >>'./b/f1' => 'f1', >>'./b/c/fa' => 'fa', >>'./b/l/c/f2' => 'f2', >>'./b/g/f/r/fb' => 'fb' >>); >> >> >> my %h2 = ( >>'./b/fb'=> 'fb',

Re: Still pondering working with hashs

2010-05-03 Thread Dr.Ruud
Harry Putnam wrote: What I hope to do is compare hashes in a few different ways: determine what is in one and not in the other, for example. These example hashes are supposed to represent file names. The hashes are created by making the key the full path and file name, and the value just the e

Re: Still pondering working with hashs

2010-05-03 Thread Shawn H Corey
Harry Putnam wrote: Shawn H Corey writes: Harry Putnam wrote: But, is there an easier way? Invert both hashes and find the keys in both inverses. [...] Thanks for the nice working script... Lots to learn there. But not sure how to get at the information I asked about with it. Maybe beca

Re: Still pondering working with hashs

2010-05-02 Thread John W. Krahn
Harry Putnam wrote: my %h1 = ( './b/f1' => 'f1', './b/c/fa' => 'fa', './b/l/c/f2' => 'f2', './b/g/f/r/fb' => 'fb' ); my %h2 = ( './b/fb'=> 'fb', './b/c/fd' => 'fd', './b

Re: Still pondering working with hashs

2010-05-02 Thread Harry Putnam
Shawn H Corey writes: > Harry Putnam wrote: >> But, is there an easier way? > > Invert both hashes and find the keys in both inverses. [...] Thanks for the nice working script... Lots to learn there. But not sure how to get at the information I asked about with it. Maybe because there was a t

Re: Still pondering working with hashs

2010-05-02 Thread Shawn H Corey
Harry Putnam wrote: But, is there an easier way? Invert both hashes and find the keys in both inverses. #!/usr/bin/perl use strict; use warnings; use Data::Dumper; # Make Data::Dumper pretty $Data::Dumper::Sortkeys = 1; $Data::Dumper::Indent = 1; # Set maximum depth for Data::Dumper, zer
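
The preview truncates Shawn's script, but the inversion idea might look roughly like this: turn each path => filename hash inside out into filename => { path => 1 }, then walk one inverse and test the other. A reconstruction with made-up sample data, not the original script.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# Make Data::Dumper pretty
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Indent   = 1;

# Sample hashes in Harry's shape: full path => bare filename.
my %h1 = ( './dir1/b/f1' => 'f1', './dir1/b/c/fa' => 'fa' );
my %h2 = ( './dir2/b/fa' => 'fa', './dir2/b/c/fd' => 'fd' );

# Invert: filename => { full path => 1, ... }, since one filename can
# appear under many paths.
my ( %inverse_h1, %inverse_h2 );
$inverse_h1{ $h1{$_} }{$_} = 1 for keys %h1;
$inverse_h2{ $h2{$_} }{$_} = 1 for keys %h2;

# Filenames present in both trees, with the paths from each side.
for my $name ( sort keys %inverse_h1 ) {
    if ( exists $inverse_h2{$name} ) {
        print "$name exists in both hashes:\n",
              Data::Dumper->Dump(
                  [ $inverse_h1{$name}, $inverse_h2{$name} ],
                  [ 'in_h1',            'in_h2'            ],
              );
    }
}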