bug#12339: Bug: rm -fr . doesn't dir depth first deletion yet it is documented to do so.

2012-09-06 Thread Linda Walsh



Jim Meyering wrote:

Thanks for the patch, but it would be pretty rotten for GNU rm to make
it so "rm -rf ." deletes everything under ".", while all other vendor
rm programs diagnose the POSIX-mandated error.  People would curse us
for making GNU rm remove their precious files when they accidentally ran
that command. 

---

Just like people who ran "rm -fr *" in / and didn't get their
POSIX-mandated behavior would curse you?

You are playing Mommy to people, not allowing them to do what
they are asking the computer to do.

You are disabling rm's ability to function.  Without this patch there
is no way for it to remove the files under the current directory
without also removing the directory itself.  That cripples the
program: doing the same job requires auxiliary programs.
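
(For reference, the usual auxiliary-program workaround, e.g. with GNU
find, looks like this:)

$ find . -mindepth 1 -delete    # empty '.' without removing '.' itself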

** Users would already be cursing those who deliberately crippled
** usability in the name of compatibility.

Unix didn't have a Windows-ish mentality on the command line...
but wouldn't Unix users curse those who brought such a mentality
to the Unix command line?

"There needs to be someone willing to to step up for user
freedom, and Gnu used to be that... but it seems they are
becoming corrupt like every big organization" -- is that what
you wish to become Gnu's epitaph?  Users who saw Gnu as
a freedom supporting symbol will see this as "selling-out"
functionality in order to become respectable.

You claim users would curse at not getting an error message
for something they deliberately typed in.

Whereas I describe users who would curse because you removed
functionality.  Would you really rather support those looking for
error messages over those looking for functionality?  I would
question the wisdom of that approach, especially when you've been
clearly called on it.

GNU needs to be clear about their priorities -- maintaining software
freedom, or bowing down to corporate powers...  POSIX isn't
a user group -- it's an enforcement arm of a corporate
entity that seeks to create a proprietary vision (Unix, POSIX,
The Open Group... are all owned by the corporate entity that holds
them as assets -- see the legal pages at their website,
e.g. http://www.opengroup.org/content/legal-frequently-asked-questions).

Since when does Gnu put corporate interests before users?






bug#12339: Bug: rm -fr . doesn't dir depth first deletion yet it is documented to do so.

2012-09-06 Thread Bernhard Voelker



On September 6, 2012 at 12:56 PM Linda Walsh  wrote:

> Jim Meyering wrote:
> > Thanks for the patch, but it would be pretty rotten for GNU rm to make
> > it so "rm -rf ." deletes everything under ".", while all other vendor
> > rm programs diagnose the POSIX-mandated error.  People would curse us
> > for making GNU rm remove their precious files when they accidentally ran
> > that command.
> ---
>
> Just like people who ran "rm -fr *" in / and didn't get their
> POSIX-mandated behavior would curse you?
>
> You are playing Mommy to people, not allowing them to do what
> they are asking the computer to do.
>
> [... and ~40 lines re. Jim, GNU, POSIX, the universe ...]

Dear Linda,

why don't you stick to the point?

You provided a patch which changes the *default behaviour* of rm,
and Jim told you that we can't/shouldn't do this.

What you want is an option to delete the contents of a directory.
So why discuss the world and his brother instead of providing
a new patch introducing such an option (e.g. --only-dir-content;
there is surely a better name)?
Generalizations like that don't help here, IMHO.

Have a nice day,
Berny





bug#12339: Bug: rm -fr . doesn't dir depth first deletion yet it is documented to do so.

2012-09-06 Thread Jim Meyering
Linda Walsh wrote:
...
> GNU needs to be clear about their priorities -- maintaining software
> freedom, or bowing down to corporate powers...  POSIX isn't

While POSIX is in general a very good baseline, no one here conforms
blindly.  If POSIX is wrong, we'll lobby to change it, or, when
that fails, maybe relegate the undesirable required behavior to when
POSIXLY_CORRECT is set, or even simply ignore it.  In fact, over the
years, I have deliberately made a few GNU tools contravene some aspects
of POSIX-specified behavior that I felt were counterproductive.

We try to make the tools as useful as possible, sometimes adding features
when we deem them worthwhile.  However, we are very much against changing
the *default* behavior (behavior that has been that way for over 20
years and that is compatible with all other vendor-supplied rm programs)
without a very good reason.





bug#12365: Incorrect return value of cp with no-clobber option

2012-09-06 Thread Anoop Sharma
When -n (--no-clobber) option of cp is used and the DEST file exists, then, as
expected, cp is not able to copy the SOURCE to DEST.

However, cp returns 0 in this case.

Shouldn't it return 1 to indicate that the copy operation could not be
completed?

In the absence of this indication, how is one to know that some recovery
action, like re-trying cp with some other DEST name, is required?

Regards,
Anoop





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paolo Bonzini
[For bug-coreutils: the context here is that sed -i, just like perl -i,
breaks hard links and thus destroys the content of files with 0400
permission].
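
(A quick demonstration of the link-breaking behaviour, assuming GNU sed;
the file names are just for illustration:)

$ echo data > a
$ ln a b                   # a and b now share one inode
$ sed -i s/data/DATA/ a    # sed writes a temp file, then renames it over a
$ cat b                    # b keeps the old inode, hence the old content
data
$ cat a
DATA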

On 06/09/2012 12:38, John Darrington wrote:
>  That's expected of programs that break hard links.
> 
> I wonder how many users who are not hackers expect it?  I suspect most 
> would not.
> 
> Why is it not possible or not desirable to check the mode of the file
> before renaming?

Because there are valid use cases in which you _want_ to break hard links.

Also because it would be twice as slow and require twice as much disk
space.  The choices would be:

1) copy the input file to a temporary file, then write to the original
file while processing the copy; required space = input_size +
max(input_size, output_size)

2) write to a temporary file while processing the original, then copy it
over to the input; required space = output_size + max(input_size,
output_size).

3) same as (1) or (2) with memory replacing a temporary file.  Goes
totally against the idea of sed as a _stream_ editor.

>  Here is what the sed manual thinks.
> 
> Thanks for pointing that out (it must be a recent addition; my installed
> version doesn't have this text).

Actually it is older than the version control repository, so 2004 or
older.  But it is under "Reporting bugs", not under the -i option
(patches welcome ;)).

You'll also find it under /usr/share/doc/sed-4.2.1/BUGS.

>  `-i' clobbers read-only files
>   In short, `sed -i' will let you delete the contents of a read-only
>   file, and in general the `-i' option (*note Invocation: Invoking
>   sed.) lets you clobber protected files.  This is not a bug, but
>   rather a consequence of how the Unix filesystem works.
>  
>   The permissions on a file say what can happen to the data in that
>   file, while the permissions on a directory say what can happen to
>   the list of files in that directory.  `sed -i' will not ever open
>   for writing  a file that is already on disk.  Rather, it will work
>   on a temporary file that is finally renamed to the original name:
>   if you rename or delete files, you're actually modifying the
>   contents of the directory, so the operation depends on the
>   permissions of the directory, not of the file.  For this same
>   reason, `sed' does not let you use `-i' on a writeable file in a
>   read-only directory, and will break hard or symbolic links when
>   `-i' is used on such a file.
> 
> I don't think that this addresses the issue at hand.  The program does
> something non-intuitive that can lead to loss of data, and it wouldn't
> be reasonable to blame the user in that instance.

I agree, but it's not me who designed the Unix filesystem permissions.

> (Some) other GNU programs don't behave like this.  For 
> example "truncate foo" on a readonly foo will exit with an error

Truncate does not process foo for input at the same time, so it isn't
really relevant.

> , as will  "dd if=foo of=foo count=10".

dd has a well-defined behavior for overlapping input and output, and
this well-defined behavior in fact mandates that dd doesn't break hard
links.

> Likewise, "shuf foo -o foo".

I consider "shuf foo -o foo" (on a read-write file) to be insecure.
Besides, it works by chance just because it reads everything in memory
first.  If it used mmap to process the input file, "shuf foo -o foo"
would be broken, and the only way to fix it would be to do the same as
"sed -i".

shuf could in fact introduce a "shuf -i" mode that would be consistent
with the way "sed -i" works, including the ability to create a backup
file _and_ the breaking of hard links.

Paolo





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Pádraig Brady

On 09/06/2012 01:12 PM, Paolo Bonzini wrote:

> [For bug-coreutils: the context here is that sed -i, just like perl -i,
> breaks hard links and thus destroys the content of files with 0400
> permission].
>
> [... full quote of the message above trimmed ...]
>
> I consider "shuf foo -o foo" (on a read-write file) to be insecure.
> Besides, it works by chance just because it reads everything in memory
> first.  If it used mmap to process the input file, "shuf foo -o foo"
> would be broken, and the only way to fix it would be to do the same as
> "sed -i".
>
> shuf could in fact introduce a "shuf -i" mode that would be consistent
> with the way "sed -i" works, including the ability to create a backup
> file _and_ the breaking of hard links.


Well `sort` and `shuf` need to read all their input before
generating output. This is traditional behavior and
POSIX also states that -o can refer to one of the input files.
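
(For illustration, the traditional in-place use; sort is shown since
its output is deterministic:)

$ printf '1\n2\n3\n' > f
$ sort -r f -o f    # -o may name an input file; all input is read first
$ cat f
3
2
1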

Also related to this is having a separate "replace" wrapper
that would handle all the atomic, backup, permission, ... issues:
http://www.pixelbeat.org/docs/unix_file_replacement.html
It takes advantage of the existing coreutils to handle the above.
I really need to polish that off and submit it
(translations in the shell script were one thing that was bothering me).

I notice David Wheeler proposed much the same thing
with the "rewrite" util (I like that name too):
http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/5348

cheers,
Pádraig





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paolo Bonzini
On 06/09/2012 14:30, Pádraig Brady wrote:
>>
>> I consider "shuf foo -o foo" (on a read-write file) to be insecure.
>> Besides, it works by chance just because it reads everything in memory
>> first.  If it used mmap to process the input file, "shuf foo -o foo"
>> would be broken, and the only way to fix it would be to do the same as
>> "sed -i".
>>
>> shuf could in fact introduce a "shuf -i" mode that would be consistent
>> with the way "sed -i" works, including the ability to create a backup
>> file _and_ the breaking of hard links.
> 
> Well `sort` and `shuf` need to read all their input before
> generating output.

Yes, but they could use mmap instead of a single large buffer to cope
with files that are bigger than the available memory, but smaller than
the address space.
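
(A minimal C sketch of that idea; assumptions: POSIX mmap, a regular
file argument, error handling elided for brevity:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  if (argc < 2)
    return 1;
  int fd = open (argv[1], O_RDONLY);
  struct stat st;
  fstat (fd, &st);
  /* Map the file instead of reading it into one large buffer: pages are
     faulted in on demand, so a file bigger than RAM (but smaller than
     the address space) can still be processed.  */
  const char *p = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  size_t lines = 0;
  for (off_t i = 0; i < st.st_size; i++)
    lines += (p[i] == '\n');
  printf ("%zu lines\n", lines);
  munmap ((void *) p, st.st_size);
  close (fd);
  return 0;
}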

> This is traditional behavior and
> POSIX also states that -o can refer to one of the input files.

Interesting, thanks!

Paolo





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Bernhard Voelker



On September 6, 2012 at 2:12 PM Paolo Bonzini  wrote:

> [For bug-coreutils: the context here is that sed -i, just like perl -i,
> breaks hard links and thus destroys the content of files with 0400
> permission].

I consider these to be two different cases:

* 'sed -i' breaks hard links:
That's because it places the output at the place where
the input file was (by unlink+rename).
That's okay IMO.

* 'sed -i' destroys the content of files with 0400 perms:
That's a bug IMHO.  sed should open the input file read-write;
if that fails, the input stays unchanged, with a nice diagnostic.

In 'sort -o', we recently added a similar check right at the beginning
to avoid useless processing possibly leading to an error afterwards:
see http://bugs.gnu.org/11816 with the commit
http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=commit;h=44fbd3fd862e34d42006f8b74cb11c9c56346417


Have a nice day,
Berny





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Jim Meyering
Paolo Bonzini wrote:
> [For bug-coreutils: the context here is that sed -i, just like perl -i,
> breaks hard links and thus destroys the content of files with 0400
> permission].

Did I misunderstand how "destroy" is used above?

$ echo important > k
$ chmod a-w k
$ sed -i s/./X/ k
$ cat k
X
$ ls -og k
-r--r--r--. 1 10 Sep  6 15:23 k





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paolo Bonzini
On 06/09/2012 18:11, Paul Eggert wrote:
>> > I consider "shuf foo -o foo" (on a read-write file) to be insecure.
>> > Besides, it works by chance
> It's not by chance.  shuf is designed to let you
> shuffle a file in-place, and is documented to work,
> by analogy with "sort -o foo foo".  If we ever
> change "shuf" to use mmap, we'll make
> sure it continues to work in-place.

Yeah, I read that from Padraig.  I stand corrected.

> I'm not sure what is meant by "insecure" here.
> Of course there are race conditions if other
> processes modify a file when "shuf"
> reads or writes it, but that's true for pretty
> much any program that reads or writes any file,
> including sed -i.

No, unlink/rename "sed -i" replaces the file atomically.  A program that
reads the target file will never be able to observe an intermediate
result.  This is not true of "shuf -o foo foo".

(In addition, the temporary file for "sed -i" is opened with 0400
permissions for the user running sed, and will not have the same
owner/group/ACL/context as the target file until just before renaming to
the destination).

It's mostly paranoia, but the race window _is_ there unless you use
rename and break hard links.
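
(For concreteness, a minimal sketch of the replace-via-rename pattern
being discussed; POSIX calls only, error checks mostly elided, and the
helper name is made up:)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Atomically replace PATH with LEN bytes of DATA: readers see either the
   old file or the new one, never a partial write.  Hard links to the old
   file keep the old content -- the "breaking" discussed above.  */
static int
replace_file (const char *path, const char *data, size_t len)
{
  char tmp[4096];
  snprintf (tmp, sizeof tmp, "%s.XXXXXX", path);
  int fd = mkstemp (tmp);       /* temporary file is created with mode 0600 */
  if (fd < 0)
    return -1;
  write (fd, data, len);
  fsync (fd);                   /* make the data durable before the rename */
  close (fd);
  return rename (tmp, path);    /* POSIX: atomic; target never goes missing */
}

int
main (void)
{
  return replace_file ("example.txt", "new content\n", 12) ? 1 : 0;
}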

Paolo





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paul Eggert
On 09/06/2012 09:24 AM, Paolo Bonzini wrote:
> A program that reads the target file will never
> be able to observe an intermediate result. 

Sure, but that doesn't fix the race condition I
mentioned.  If some other process is writing F
while I run 'sed -i F', F is not replaced atomically.
That's true even if the other process is another
instance of 'sed'.

While 'sed -i' solves some race conditions,
it doesn't even come close to solving them all.
Fixing this problem in general is above
sed's pay grade, just as it's above shuf's.





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paolo Bonzini
On 06/09/2012 18:35, Paul Eggert wrote:
>> > A program that reads the target file will never
>> > be able to observe an intermediate result. 
> Sure, but that doesn't fix the race condition I
> mentioned.  If some other process is writing F
> while I run 'sed -i F', F is not replaced atomically.

How not so?

Paolo

> That's true even if the other process is another
> instance of 'sed'.






bug#12339: Bug: rm -fr . doesn't dir depth first deletion yet it is documented to do so.

2012-09-06 Thread Bob Proulx
Jim Meyering wrote:
> Linda Walsh wrote:
> ...
> > GNU needs to be clear about their priorities -- maintaining software
> > freedom, or bowing down to corporate powers...  POSIX isn't
> 
> While POSIX is in general a very good baseline, no one here conforms
> blindly.  If POSIX is wrong, we'll lobby to change it, or, when
> that fails, maybe relegate the undesirable required behavior to when
> POSIXLY_CORRECT is set, or even simply ignore it.  In fact, over the
> years, I have deliberately made a few GNU tools contravene some aspects
> of POSIX-specified behavior that I felt were counterproductive.
> 
> We try to make the tools as useful as possible, sometimes adding features
> when we deem them worthwhile.  However, we are very much against changing
> the *default* behavior (behavior that has been that way for over 20
> years and that is compatible with all other vendor-supplied rm programs)
> without a very good reason.

Because I originally voted that this felt like a bug, I want to state
that, after determining that this has been legacy historical practice
for a very long time, I wouldn't change it now.  Portability of
applications is more important.

This isn't a feature that could be working in a script for someone.
It isn't something that was recently removed that would cause a script
to break.  A script written now will run with the same behavior across
multiple different types of systems.  I think we should leave things
unchanged.

Bob





bug#12365: Should cp -n return 0, when DEST exists?

2012-09-06 Thread Bernhard Voelker
On September 6, 2012 at 5:24 PM Jim Meyering  wrote:

> Bernhard Voelker wrote:
> > Maybe it's worth adding a line about the exist status
> > when using -n or -i (together with answering 'n')?
>
> Yes, please.  That would be an improvement.

(my first patch created on cygwin)


From 65d2d16340cee38f0a7e059af86be49f21eef84d Mon Sep 17 00:00:00 2001
From: Bernhard Voelker 
Date: Thu, 6 Sep 2012 18:39:47 +0200
Subject: [PATCH] doc: improve documentation of -n and -i for cp and mv
* doc/coreutils.texi (cp invocation): Add a note to the description
of the -n option about how it compares to -i.  Add a note about the
exit status possibly being zero when -n or -i is used.
(mv invocation): Likewise.
---
doc/coreutils.texi | 25 +++--
1 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/doc/coreutils.texi b/doc/coreutils.texi
index f2620bc..0324d1b 100644
--- a/doc/coreutils.texi
+++ b/doc/coreutils.texi
@@ -7772,8 +7772,14 @@ a regular file in the destination tree.
@opindex -n
@opindex --no-clobber
Do not overwrite an existing file. The @option{-n} option overrides a previous
-@option{-i} option. This option is mutually exclusive with @option{-b} or
-@option{--backup} option.
+@option{-i} option.
+@macro optClobberAndI
+In effect, this option works as if @option{-i} had been given and the
+user had declined to overwrite the target in every case (of course
+without any prompting).
+@end macro
+@optClobberAndI
+This option is mutually exclusive with @option{-b} or @option{--backup} option.

@item -P
@itemx --no-dereference
@@ -8026,8 +8032,15 @@ However, mount point directories @emph{are} copied.

@end table

+@cindex exit status of @command{cp}
+Exit status:
+
@exitstatus

+However, if the existing target is not overwritten because @option{-n}
+is used, or because @option{-i} is used and the user declines to
+overwrite it, then @command{cp}'s exit status is still zero.
+

@node dd invocation
@section @command{dd}: Convert and copy a file
@@ -8747,6 +8760,7 @@ If the response is not affirmative, the file is skipped.
@cindex prompts, omitting
Do not overwrite an existing file.
@mvOptsIfn
+@optClobberAndI
This option is mutually exclusive with @option{-b} or @option{--backup} option.

@item -u
@@ -8778,8 +8792,15 @@ Print the name of each file before moving it.

@end table

+@cindex exit status of @command{mv}
+Exit status:
+
@exitstatus

+However, if the existing target is not overwritten because @option{-n}
+is used, or because @option{-i} is used and the user declines to
+overwrite it, then @command{mv}'s exit status is still zero.
+

@node rm invocation
@section @command{rm}: Remove files or directories
--
1.7.9






bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Bob Proulx
Paul Eggert wrote:
> >> If some other process is writing F
> >> while I run 'sed -i F', F is not replaced atomically.
> 
> > How not so?
> 
> For example:
> 
> echo ac >f
> sed -i 's/a/b/' f &
> sed -i 's/c/d/' f
> wait
> cat f
> 
> If 'sed' were truly atomic, then the output of this would
> always be 'bd'.  But it's not.

The file replacement is atomic.  The reading of the file is not.

Bob





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paul Eggert
On 09/06/2012 10:12 AM, Bob Proulx wrote:
> The file replacement is atomic.  The reading of the file is not.

Sure, but the point is that from the end user's
point of view, 'sed -i' is not atomic, and can't
be expected to be atomic.

'sed -i' and 'sort -o' both use some atomic operations
internally, but neither is atomic overall.  Users who
want atomicity must look elsewhere, or implement it
themselves.





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Bernhard Voelker

On September 6, 2012 at 7:23 PM Paul Eggert  wrote:

> On 09/06/2012 10:12 AM, Bob Proulx wrote:
> > The file replacement is atomic.  The reading of the file is not.
>
> Sure, but the point is that from the end user's
> point of view, 'sed -i' is not atomic, and can't
> be expected to be atomic.
>
> 'sed -i' and 'sort -o' both use some atomic operations
> internally, but neither is atomic overall.  Users who
> want atomicity must look elsewhere, or implement it
> themselves.

Why can't 'sed -i' be made atomic for the user?
Today, it creates a temporary file for the output.
At the end, it calls rename().  What if it instead
rewound the input and the temporary file and copied
the latter's content back into the input file?
Okay, this is slower than a rename(), but it would
write into the same inode.  To preserve today's behaviour,
this could be done with a new option like --in-place-same.
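
(A rough shell equivalent of the proposed copy-back behaviour --
hypothetical, not an existing sed option; cp writes into f's
existing inode:)

$ sed 's/old/new/' f > f.tmp && cp f.tmp f && rm f.tmp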

Just a thought.

Have a nice day,
Berny





bug#12339: Bug: rm -fr . doesn't dir depth first deletion yet it is documented to do so.

2012-09-06 Thread Linda Walsh



Bernhard Voelker wrote:



> On September 6, 2012 at 12:56 PM Linda Walsh  wrote:
>
> > Jim Meyering wrote:
> > > Thanks for the patch, but it would be pretty rotten for GNU rm to make
> > > it so "rm -rf ." deletes everything under ".", while all other vendor
> > > rm programs diagnose the POSIX-mandated error.  People would curse us
> > > for making GNU rm remove their precious files when they accidentally ran
> > > that command.
> > ---
> >
> > Just like people who ran "rm -fr *" in / and didn't get their
> > POSIX-mandated behavior would curse you?
> >
> > You are playing Mommy to people, not allowing them to do what
> > they are asking the computer to do.
> >
> > [... and ~40 lines re. Jim, GNU, POSIX, the universe ...]
>
> Dear Linda,
>
> why don't you stick to the point?


I wasn't the one who raised the point of people cursing Gnu for
removing their precious files when they accidentally or deliberately
invoked rm in a way that produces the non-functional behavior.

If we were going to talk about them cursing Gnu, I thought
I would fully put it in perspective.

That's what that exposé was about.

Note that it wasn't directed personally, and it included facts listed
to make a stronger counterpoint to what, admittedly, was likely a
lightly given reason for not changing a default behavior.  It was
addressing that comment alone.

You want an on-point counter proposal?  Might I suggest this as a
rational one:


If the user issues rm -r ., it issues a warning:

"Do you really wish to remove all files under '.'?"

That won't break compatible behavior.  Only if the user chooses the
non-default 'force' option ("do what I say and shut up"), which is
not a default option, would it do the action I suggest.

In any case, if POSIXLY_CORRECT is set, it will act as per POSIX
requirements.

It is TELLING, and important to understand, Jim's statement that
"Very few people ever set that envvar."  Most people don't want strict
POSIX compatibility -- for reasons exactly like this: POSIX isn't about
usability, it's about program portability.  So for interactive use, it
isn't something most people would want.









bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Torbjorn Granlund
I and Niels now would appreciate feedback on the new factor code.

We've put the entire little project in a tar file, which is attached.
The code is also available at .

Here is the README file:

NT factor  (Niels' and Torbjörn's factor, or New Technology factor)

This is a project for producing a decent 'factor' command for GNU.
The code was written by Torbjörn Granlund and Niels Möller in Aug-Sept
2012, but parts of the code are based on preexisting GMP code.

The old factor program could handle numbers < 2^64, unless GMP was
available at build time.  Without GMP, only trial division was used;
with GMP an old version of GMP's demos/factorize.c code was used,
which relies on Pollard rho for much better performance.  The old
factor program used probabilistic Miller-Rabin primality testing.

The new code can factor numbers < 2^127 and does not currently make
use of GMP, not even as an option.  It uses fast trial division code,
Pollard rho, and optionally SQUFOF.  It uses the Lucas primality test
instead of a probabilistic test.

The new code is several times faster than the old code, in particular
on 32-bit hardware.  On current 64-bit machines, it is between 3 and
10 times faster for ranges of numbers; for 32-bit machines we have
seen a 150,000-fold improvement for some number range.  The
advantage for the new code is greater for larger numbers, matching
mathematical theory of algorithm efficiency.  (These numbers compare
the new code to the old GMP-less code; the GMP-enabled old code is
only between 3 and 10 times slower.)

For smaller numbers, more than half the time is spent in I/O,
buffering, and conversions.  We have not tried to optimise these
parts, but instead kept them clean.


* We don't have any --help or --version options currently.

* Our packaging with separate Makefile, outseq.c and ChangeLog was
  useful during our development.  We don't expect these to be useful
  in coreutils.  In particular, the slow testing of the 'check' target
  is probably quite unsuitable for coreutils (though similar but
  quicker tests would make sense).

* The files probably needed for coreutils are:

  o factor.c -- main factoring code
  o make-prime-list.c -- primes table generator program
  o longlong.h -- arithmetic support routines (from GMP)


Technical considerations:

* Should we handle numbers >= 2^127?  That would in effect mean
  merging a current version of GMP's demos/factorize.c into this
  factor.c, and putting that under HAVE_GMP (like in the old factor.c).
  It should be understood that factoring such large numbers with only
  Pollard rho is not very feasible.

* We think a --verbose option would be nice, although we don't have
  one in the present version.  It would output information on
  algorithm switches and bounds reached.  Opinions?


Portability caveats:

* We rely on POSIX.1 getchar_unlocked for a performance advantage.

* We have some hardwired W_TYPE_SIZE settings for the code interfacing
  to longlong.h.  It is now 64 bits.  It will break on systems where
  uintmax_t is not a 64-bit type.  Please see the beginning of
  factor.c.


Legal caveat:

* Both Niels and Torbjörn have long been GNU hackers.  We do not
  currently have paperwork in place for coreutils contributions.  This
  will certainly be addressed.




nt-factor.tar.lz
Description: Binary data


-- 
Torbjörn




bug#12366: [gnu-prog-discuss] bug#12366: Writing unwritable files

2012-09-06 Thread Bob Friesenhahn

On Thu, 6 Sep 2012, Paolo Bonzini wrote:



> > I'm not sure what is meant by "insecure" here.
> > Of course there are race conditions if other
> > processes modify a file when "shuf"
> > reads or writes it, but that's true for pretty
> > much any program that reads or writes any file,
> > including sed -i.
>
> No, unlink/rename "sed -i" replaces the file atomically.  A program that

POSIX rename assures that the destination path always exists if it
already existed.  If unlink/ln were used, then the destination path
would temporarily be missing.  While 'rename' is occurring, a second
(parallel) reader/writer has no idea which version will be accessed.


Microsoft Windows and other operating systems might not support the
POSIX semantics.

Certain filesystems (or their implementations) might not support an
atomic 'rename'.



> It's mostly paranoia, but the race window _is_ there unless you use
> rename and break hard links.


Yes, you must use rename, and rename would need to work as per the 
POSIX specification.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/





bug#12366: [gnu-prog-discuss] Writing unwritable files

2012-09-06 Thread Paul Eggert
On 09/06/2012 10:35 AM, Bernhard Voelker wrote:
> Why can't 'sed -i' be made atomic for the user?
> Today, it creates a temporary file for the output.
> At the end, it calls rename(). What if it instead
> rewound the input and the temporary file and copied
> the latter's content back into the input file?

That's kind of what 'sort -o' does, and it also
has race conditions.  For example, in that last phase
while it's copying the content to the input file, some other
process might be reading the input file.

There is no good general and portable atomic solution to
this sort of problem, not in POSIX anyway.  Practical
implementations of utilities like 'sed' and 'sort'
and 'shuf' all involve races of some sort or another.





bug#12366: [gnu-prog-discuss] bug#12366: Writing unwritable files

2012-09-06 Thread John Darrington
On Thu, Sep 06, 2012 at 11:23:21AM -0700, Paul Eggert wrote:
 On 09/06/2012 10:35 AM, Bernhard Voelker wrote:
 > Why can't 'sed -i' be made atomic for the user?
 > Today, it creates a temporary file for the output.
 > At the end, it calls rename().  What if it instead
 > rewound the input and the temporary file and copied
 > the latter's content back into the input file?
 
 That's kind of what 'sort -o' does, and it also
 has race conditions.  For example, in that last phase
 while it's copying the content to the input file, some other
 process might be reading the input file.

I don't think that matters.  In fact I like being able to use
tail -f to see what's being written to a file, and I find the
Mozilla-like behaviour, where I have to wait until the entire
file has been downloaded in order to see the first byte,
rather annoying.
 
J'

-- 
PGP Public key ID: 1024D/2DE827B3 
fingerprint = 8797 A26D 0854 2EAB 0285  A290 8A67 719C 2DE8 27B3
See http://keys.gnupg.net or any PGP keyserver for public key.





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Jim Meyering
Torbjorn Granlund wrote:
> I and Niels now would appreciate feedback on the new factor code.
>
> We've put the entire little project in a tar file, which is attached.
> The code is also available at .

Thanks a lot!  I've started looking at the code.
I was surprised to see "make check" fail.

$ ./ourseq 0 10 > k
$ ./factor < k
0:
1:
2: 2
3: 3
4: 2 2
5: 5
6: 2 3
7: 7
8: 2 2 2
9: 3 3
zsh: abort (core dumped)  ./factor < k

That was due to unexpected input.
Poking around, I see that ourseq writes from uninitialized memory:

$ ./ourseq 9 11
9
102
112
$ ./ourseq 9 11
9
10>
11>
$ ./ourseq 9 11
9
10"
11"

The fix is to change the memmove to copy one more byte each time,
i.e., to include the required trailing NUL.
With that, it looks like "make check" will pass.
It will definitely benefit from running the individual
tests in parallel ;-)

From 9e6db73344f43e828b8d716a0ea6a5842980d518 Mon Sep 17 00:00:00 2001
From: Jim Meyering 
Date: Thu, 6 Sep 2012 22:12:41 +0200
Subject: [PATCH] incr: don't omit trailing NUL when incrementing

---
 ourseq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ourseq.c b/ourseq.c
index d2472aa..cb71f13 100644
--- a/ourseq.c
+++ b/ourseq.c
@@ -48,7 +48,7 @@ incr (string *st)
}
   s[i] = '0';
 }
-  memmove (s + 1, s, len);
+  memmove (s + 1, s, len + 1);
   s[0] = '1';
   st->len = len + 1;
 }
--
1.7.12.176.g3fc0e4c





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Torbjorn Granlund
Jim Meyering  writes:

  zsh: abort (core dumped)  ./factor < k
  
  That was due to unexpected input.

The parsing of the new factor is probably not too bad, but the error
reporting could be better.  :o)

  --- a/ourseq.c
  +++ b/ourseq.c
  @@ -48,7 +48,7 @@ incr (string *st)
}
 s[i] = '0';
   }
  -  memmove (s + 1, s, len);
  +  memmove (s + 1, s, len + 1);
 s[0] = '1';
 st->len = len + 1;
   }

Thanks.  Mea culpa: ourseq is not my finest work, just a hack for
testing factor.
  

-- 
Torbjörn





bug#12366: [gnu-prog-discuss] bug#12366: Writing unwritable files

2012-09-06 Thread Paolo Bonzini
On 06/09/2012 20:21, Bob Friesenhahn wrote:
>>
>> No, unlink/rename "sed -i" replaces the file atomically.  A program that
> 
> POSIX rename assures that the destination path always exists if it
> already existed.

My bad, I meant link-breaking/rename.  Of course you must not unlink first.

Paolo





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Jim Meyering
Torbjorn Granlund wrote:
> I and Niels now would appreciate feedback on the new factor code.
...
> * Our packaging with separate Makefile, outseq.c and ChangeLog was
>   useful during our development.  We don't expect these to be useful
>   in coreutils.  In particular, the slow testing of the 'check' target
>   is probably quite unsuitable for coreutils (but similar but quicker
>   tests would make sense).

I think the tests will be fine, as long as they're separate, and hence
can be parallelized by the default mechanism.  We might want to label
most of them as "expensive", so that they're run only by those who set
RUN_EXPENSIVE_TESTS=yes in their environment.
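
(I.e., such tests would then only be run via something like:)

$ RUN_EXPENSIVE_TESTS=yes make check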

> * The files probably needed for coreutils are:
>
>   o factor.c -- main factoring code
>   o make-prime-list.c -- primes table generator program
>   o longlong.h -- arithmetic support routines (from GMP)
>
>
> Technical considerations:
>
> * Should we handle numbers >= 2^127?  That would in effect mean
>   merging a current version of GMP's demos/factorize.c into this
>   factor.c, and put that under HAVE_GMP (like in the old factor.c).
>   It should be understood that factoring such large numbers with only
>   Pollard rho is not very feasible.

The existing code can factor arbitrarily large numbers quickly, as long
as they have no large prime factors.  We should retain that capability.

> * We think a --verbose option would be nice, although we don't have
>   one in the present version.  It would output information on
>   algorithm switches and bounds reached.  Opinions?

I think it would be worthwhile, especially to give an idea of what progress
is being made when factoring very large numbers, but hardly something
that need be done now.

E.g., currently this doesn't print much:

$ M8=$(echo 2^31-1|bc) M9=$(echo 2^61-1|bc) M10=$(echo 2^89-1|bc)
$ factor --verbose $(echo "$M8 * $M9 * $M10" | bc)
[using arbitrary-precision arithmetic][trial division (32761)] [is number prime?] [pollard-rho (1)]

Ideally it'd print something every second or two.

> Portability caveats:
>
> * We rely on POSIX.1 getchar_unlocked for a performance advantage.
>
> * We have some hardwired W_TYPE_SIZE settings for the code interfacing
>   to longlong.h.  It is now 64 bits.  It will break on systems where
>   uintmax_t is not a 64-bit type.  Please see the beginning of
>   factor.c.

I wonder how many types of systems would be affected.

> Legal caveat:
>
> * Both Niels and Torbjörn are GNU hackers since long.  We do not
>   currently have paperwork in place for coreutils contributions.  This
>   will certainly be addressed.

Thanks.





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Torbjorn Granlund
Jim Meyering  writes:

  The existing code can factor arbitrarily large numbers quickly, as long
  as they have no large prime factors.  We should retain that capability.
  
OK, so we'll put the GMP demos program into this one.

This opens another technical concern:

We have moved towards proving primality, since for 128 bit numbers it
can be done quickly.  But if we allow arbitrary large numbers, it is
expensive.

We might want an option for choosing probabilistic testing, perhaps
--prp (a common abbreviation for PRobabilistic Prime).  By default, we
should prove primality, I think.

My current devel version of GMP's demos/factorize has Lucas code.

  > * We think a --verbose option would be nice, although we don't have
  >   one in the present version.  It would output information on
  >   algorithm switches and bounds reached.  Opinions?
  
  I think it would be worthwhile, especially to give an idea of what progress
  is being made when factoring very large numbers, but hardly something
  that need be done now.
  
  E.g., currently this doesn't print much:
  
  $ M8=$(echo 2^31-1|bc) M9=$(echo 2^61-1|bc) M10=$(echo 2^89-1|bc)
  $ factor --verbose $(echo "$M8 * $M9 * $M10" | bc)
  [using arbitrary-precision arithmetic][trial division (32761)] [is number prime?] [pollard-rho (1)]
  
  Ideally it'd print something every second or two.
  
I'll let Niels worry about this, since he was the one to ask for it.

  > Portability caveats:
  >
  > * We rely on POSIX.1 getchar_unlocked for a performance advantage.
  >
  > * We have some hardwired W_TYPE_SIZE settings for the code interfacing
  >   to longlong.h.  It is now 64 bits.  It will break on systems where
  >   uintmax_t is not a 64-bit type.  Please see the beginning of
  >   factor.c.
  
  I wonder how many types of systems would be affected.
  
It is not used currently anywhere in coreutils?  Perhaps coreutils could
use autoconf for checking this?  (If we're really crazy, we could speed
the factor program by an additional 20% by using blocked input with
e.g. fread.)

Please take a look at the generated code for factor_using_division,
towards the end where 8 imulq should be found (on amd64).  The code uses
mov, imul, cmp, jbe for testing divisibility by a prime; the branch
is taken when the prime divides the number being factored, and is thus
rarely taken.  (I suppose we could do a better job of describing the
maths, with some references.  This particular trick is from "Division by
invariant integers using multiplication".)
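
(To illustrate the trick: a compact C sketch with made-up helper names;
in real code the inverse and the limit would be precomputed per prime:)

#include <stdint.h>
#include <stdio.h>

/* Inverse of odd P modulo 2^64, by Newton iteration: P itself is correct
   to 3 low bits, and each step doubles the number of correct bits.  */
static uint64_t
binv (uint64_t p)
{
  uint64_t inv = p;
  for (int i = 0; i < 5; i++)
    inv *= 2 - p * inv;          /* 3 -> 6 -> 12 -> 24 -> 48 -> 96 bits */
  return inv;
}

/* N is divisible by odd P iff N * binv(P) (mod 2^64) <= (2^64-1)/P.
   This compiles to mov/imul/cmp with a rarely-taken branch.  */
static int
divisible (uint64_t n, uint64_t pinv, uint64_t lim)
{
  return n * pinv <= lim;
}

int
main (void)
{
  uint64_t p = 101, pinv = binv (p), lim = UINT64_MAX / p;
  printf ("%d %d\n", divisible (10100, pinv, lim),   /* 1: 101 divides 10100 */
          divisible (10101, pinv, lim));             /* 0: 101 does not divide 10101 */
  return 0;
}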

-- 
Torbjörn





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Paul Eggert
On 09/06/2012 02:33 PM, Jim Meyering wrote:
>> > * We have some hardwired W_TYPE_SIZE settings for the code interfacing
>> >   to longlong.h.  It is now 64 bits.  It will break on systems where
>> >   uintmax_t is not a 64-bit type.  Please see the beginning of
>> >   factor.c.
> I wonder how many types of systems would be affected.

It's only a matter of time.  GCC already supports 128-bit
integers on my everyday host (Fedora 17, x86-64, GCC 4.7.1).
Eventually uintmax_t will grow past 64 bits, if only for the
crypto guys.

If the code needs exactly-64-bit unsigned integers, shouldn't
it be using uint64_t?  That's the standard way of doing
that sort of thing.  Gnulib can supply the type on pre-C99
platforms.  Weird but standard-conforming platforms that
don't have uint64_t will be out of luck, but surely they're out
of luck anyway.





bug#12350: Composites identified as primes in factor.c (when HAVE_GMP)

2012-09-06 Thread Jim Meyering
Jim Meyering wrote:
> Jim Meyering wrote:
>
>> Torbjorn Granlund wrote:
>>> The very old factoring code, cut from a now-obsolete version of GMP,
>>> does not pass proper arguments to the mpz_probab_prime_p function.  It
>>> asks for 3 Miller-Rabin tests only, which is not sufficient.
>>
>> Hi Torbjorn
>>
>> Thank you for the patch and explanation.
>> I've converted that into the commit below in your name.
>> Please proofread it and let me know if you'd like to change anything.
>> I tweaked the patch to change MR_REPS from a #define to an enum
>> and to add the comment just preceding.
>>
>> I'll add NEWS and tests separately.
> ...
>> From: Torbjorn Granlund 
>> Date: Tue, 4 Sep 2012 16:22:47 +0200
>> Subject: [PATCH] factor: don't ever declare composites to be prime
>
> Torbjörn, I've just noticed that I misspelled your name above.
>
> Here's the NEWS/tests addition.
> Following is an adjusted commit that spells your name properly.
>
>>From e561ff991b74dc19f6728aa1e6e61d1927055ac1 Mon Sep 17 00:00:00 2001

There have been enough changes (mostly typo fixes) that I'm re-posting
these for review before I push.  Also, I added this sentence to NEWS
about the performance hit:

The fix makes factor somewhat slower (~25%) for ranges of consecutive
numbers, and up to 8 times slower for some worst-case individual numbers.


From 68cf62bb04ecd138c81b68539c2a065250ca4390 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Torbj=C3=B6rn=20Granlund?= 
Date: Tue, 4 Sep 2012 18:38:29 +0200
Subject: [PATCH 1/2] factor: don't ever declare composites to be prime

The multiple-precision factoring code (with HAVE_GMP) was copied from
a now-obsolete version of GMP that did not pass proper arguments to
the mpz_probab_prime_p function.  That makes the code perform only
3 Miller-Rabin tests, which is not sufficient.

A Miller-Rabin test will detect composites with at least a probability
of 3/4.  For a uniform random composite, the probability will actually
be much higher.

Or put another way, of the N-3 possible Miller-Rabin tests for checking
the composite N, there is no number N for which more than (N-3)/4 of the
tests will fail to detect the number as a composite.  For most numbers N
the number of "false witnesses" will be much, much lower.

Problem numbers are of the form N=pq, p,q prime and (p-1)/(q-1) = s,
where s is a small integer.  (There are other problem forms too,
involving 3 or more prime factors.)  When s = 2, we get the 3/4 factor.

It is easy to find numbers of that form that cause coreutils' factor to
fail:

  465658903
  2242724851
  6635692801
  17709149503
  17754345703
  20889169003
  42743470771
  54890944111
  72047131003
  85862644003
  98275842811
  114654168091
  117225546301
  ...

There are 9008992 composites of the form with s=2 below 2^64.  Each of
the 3 Miller-Rabin tests misses such a composite with probability up to
1/4, so one would expect about 9008992/4^3 = 9008992/64 = 140766 to be
invalidly recognized as primes in that range.

* src/factor.c (MR_REPS): Define to 25.
(factor_using_pollard_rho): Use MR_REPS, not 3.
(print_factors_multi): Likewise.
* THANKS.in: Remove my name, now that it will be automatically
included in the generated THANKS file.
---
 THANKS.in| 1 -
 src/factor.c | 9 ++---
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/THANKS.in b/THANKS.in
index 1580151..2c3f83c 100644
--- a/THANKS.in
+++ b/THANKS.in
@@ -608,7 +608,6 @@ Tony Leneis t...@plaza.ds.adp.com
 Tony Robinson   a...@eng.cam.ac.uk
 Toomas Soometoomas.so...@elion.ee
 Toralf Förster  toralf.foers...@gmx.de
-Torbjorn Granlund   t...@nada.kth.se
 Torbjorn Lindgren   t...@funcom.no
 Torsten Landschoff  tors...@pclab.ifg.uni-kiel.de
 Travis Gummels  tgumm...@redhat.com
diff --git a/src/factor.c b/src/factor.c
index 1d55805..e63e0e0 100644
--- a/src/factor.c
+++ b/src/factor.c
@@ -153,6 +153,9 @@ factor_using_division (mpz_t t, unsigned int limit)
   mpz_clear (r);
 }

+/* The number of Miller-Rabin tests we require.  */
+enum { MR_REPS = 25 };
+
 static void
 factor_using_pollard_rho (mpz_t n, int a_int)
 {
@@ -222,7 +225,7 @@ S4:

   mpz_div (n, n, g);   /* divide by g, before g is overwritten */

-  if (!mpz_probab_prime_p (g, 3))
+  if (!mpz_probab_prime_p (g, MR_REPS))
 {
   do
 {
@@ -242,7 +245,7 @@ S4:
   mpz_mod (x, x, n);
   mpz_mod (x1, x1, n);
   mpz_mod (y, y, n);
-  if (mpz_probab_prime_p (n, 3))
+  if (mpz_probab_prime_p (n, MR_REPS))
 {
   emit_factor (n);
   break;
@@ -411,7 +414,7 @@ print_factors_multi (mpz_t t)
   if (mpz_cmp_ui (t, 1) != 0)
 {
   debug ("[is number prime?] ");
-  if (mpz_probab_prime_p (t, 3))
+  if (mpz_probab_prime_p (t, MR_REPS))
 emit_factor (t);
   else
 factor_using_pollard_rho (t, 1);
--
1.7.12.176.g3fc0e4c


From 0

bug#12365: closed (Re: Should cp -n return 0, when DEST exists?)

2012-09-06 Thread Anoop Sharma
On Thu, Sep 6, 2012 at 8:03 PM, GNU bug Tracking System
 wrote:
> Your bug report
>
> #12365: Incorrect return value of cp with no-clobber option
>
> which was filed against the coreutils package, has been closed.
>
> The explanation is attached below, along with your original report.
> If you require more details, please reply to 12...@debbugs.gnu.org.
>
> --
> 12365: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=12365
> GNU Bug Tracking System
> Contact help-debb...@gnu.org with problems
>
>
> -- Forwarded message --
> From: Eric Blake 
> To: Anoop Sharma 
> Cc: 12365-d...@debbugs.gnu.org, coreutils 
> Date: Thu, 06 Sep 2012 08:31:53 -0600
> Subject: Re: Should cp -n return 0, when DEST exists?
> tag 12365 wontfix
> thanks
>
> On 09/06/2012 04:50 AM, Anoop Sharma wrote:
>> When -n option of cp is used and the DEST file exists, then, as
>> expected, cp is not able to copy the SOURCE to DEST.
>>
>> However, cp returns 0 in this case.
>
> cp -n is not mandated by POSIX, so we are free to do as we wish here.
> But looking at history, we added -n for coreutils 7.1 in Feb 2009, and
> the mail from that thread includes:
> https://lists.gnu.org/archive/html/bug-coreutils/2008-12/msg00159.html
>
> which states we are modeling after FreeBSD.  A quick check on my FreeBSD
> 8.2 VM shows:
>
> $ echo one > bar
> $ echo two > blah
> $ cp -n blah bar
> $ echo $?
> 0
> $ cat bar
> one
>
> that FreeBSD also returns 0 in this case, and I don't want to break
> interoperability.  Therefore, I'm going to close this as a WONTFIX,
> unless you also get buy-in from the BSD folks.
>
> By the way, there's no need to post three separate emails with the same
> contents, without first waiting at least 24 hours.  Like most other
> moderated GNU lists, you do not have to be a subscriber to post, and
> even if you are a subscriber, your first post to a given list will be
> held in a moderation queue for as long as it takes for a human to
> approve your email address as a non-spammer for all future posts
> (generally less than a day).
>
> --
> Eric Blake   ebl...@redhat.com   +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
>
>
> -- Forwarded message --
> From: Anoop Sharma 
> To: bug-coreutils@gnu.org
> Cc:
> Date: Thu, 6 Sep 2012 17:27:52 +0530
> Subject: Incorrect return value of cp with no-clobber option
> When -n (--no-clobber) option of cp is used and the DEST file exists, then, as
> expected, cp is not able to copy the SOURCE to DEST.
>
> However, cp returns 0 in this case.
>
> Shouldn't it return 1 to indicate that the copy operation could not be
> completed?
>
> In the absence of this indication, how is one to know that some recovery
> action, like re-trying cp with some other DEST name, is required?
>
> Regards,
> Anoop
>
>
>


Thank you, Eric.

I am a newbie to open source development tools and processes. I had
posted earlier to bug-coreutils@gnu.org and had got an acknowledgement
mail immediately.

Subsequently, I subscribed to coreut...@gnu.org and have now been
subscribed for more than a month. I originally posted this mail to
that list for discussion. However, there was no acknowledgement from
there and I mistakenly assumed that some spam filter is stopping my
mails from reaching the list. Therefore, I tweaked the text a bit in
an attempt to get past the spam filter and tried multiple times.
Finally, as a work-around, I posted to bug-coreutils@gnu.org and
stopped thereafter, because I got an ack again!

I will be more patient next time!

Thanks for educating,
Anoop