On Tue, May 14, 2024 at 3:35 AM Koichi Murase wrote:
>
> 2024年5月14日(火) 15:09 Martin D Kealey :
> > 1. I therefore propose that where a relative path appears in
> > BASH_SOURCE_PATH, it should be taken as relative to the directory
> > containing $0 (after resolving symlinks), rather than relative t
My OS is Mac OS X. I don't have perf. Is it only on Linux? Could you
show me the output of your perf?
On 3/7/20, Chris Down wrote:
> Peng Yu writes:
>>Could you show me how you do the profiling for this specific case?
>>Based on what proof that you can conclud
Could you show me how you do the profiling for this specific case?
On what evidence do you conclude that it is not a `[[`
performance problem?
On 3/7/20, Chris Down wrote:
> Peng Yu writes:
>>[[ $x ]] just tests whether the variable $x is of length 0 or not. So
>>
[[ $x ]] just tests whether the variable $x is of length 0 or not. So
its performance should not depend on how long the variable is.
But the following test case shows that the run time does depend on the
length of the variable.
Should this be considered a performance bug in bash?
$ x=$(printf '
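For reference, a minimal sketch of the kind of timing test being described (the exact script above is truncated; the value length and loop count here are only illustrative):

x=a                                  # short value
time for ((i = 0; i < 100000; i++)); do [[ $x ]]; done
x=$(printf 'a%.0s' {1..100000})      # long value, 100000 characters
time for ((i = 0; i < 100000; i++)); do [[ $x ]]; done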
See the following run time comparison. {1..100} is slower than
$(seq 100).
Since seq involves an external program, I'd expect the latter to be
slower. But the comparison shows the opposite.
I guess seq did some optimization?
Can the performance of {1..100} be improved so that it is f
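For reference, one way such a comparison is typically written (a sketch; the range is illustrative since the original test is truncated above):

$ time { for i in {1..1000000}; do :; done; }
$ time { for i in $(seq 1000000); do :; done; }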
Hi,
Bash uses the low 16 bits for $RANDOM.
https://git.savannah.gnu.org/cgit/bash.git/tree/variables.c#n1321
https://git.savannah.gnu.org/cgit/bash.git/tree/variables.c#n1356
It seems that the high bits should be more random. If so, maybe the
high 16 bits should be kept if $RANDOM must stay in 1
Hi,
yylex() still gives the token ARITH_CMD for the following command. The
error seems to be raised at the parsing stage. Shouldn't the error be
caught in the lexical analysis stage?
$ ((x = 10 + 5; ++x; echo $x))
bash: ((: x = 10 + 5; ++x: syntax error: invalid arithmetic operator
(error token i
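For reference, (( )) accepts arithmetic expressions only, so the statements have to be joined with the comma operator and the echo moved outside; a minimal working sketch:

$ (( x = 10 + 5, ++x )); echo "$x"
16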
Hi,
`echo {` treats `{` as WORD.
`{ echo; }` treats `{` as a token of `{`.
`{a` treats `{a` as a WORD.
I don't see why yylex() treats `{` context-dependently.
Wouldn't it be better to just treat a bare `{` as a `{` token?
What is the reasoning behind the current design of the syntax?
-
Hi,
I know that ASSIGNMENT_WORD in parse.y is for assignments like x=10.
But in the grammar rules, I don't see any difference between them in
terms of the actions taken. Where is the code that deals with them
differently?
Also, why is x=10 parsed as a single token? Why not parse it as three
tokens "x"
On Fri, Feb 8, 2019 at 10:50 AM Chet Ramey wrote:
>
> On 2/8/19 10:52 AM, Peng Yu wrote:
> > On Fri, Feb 8, 2019 at 9:42 AM Chet Ramey wrote:
> >>
> >> On 2/8/19 10:39 AM, Peng Yu wrote:
> >>>> Yes: ocache_free.
> >>>
> >>&g
On Fri, Feb 8, 2019 at 9:42 AM Chet Ramey wrote:
>
> On 2/8/19 10:39 AM, Peng Yu wrote:
> >> Yes: ocache_free.
> >
> > Could you please help explain what wdcache and wlcache actually do.
> > Why is it essential to have them? Why not just alloc and free them
>
> Yes: ocache_free.
Could you please help explain what wdcache and wlcache actually do?
Why is it essential to have them? Why not just alloc and free them
without the caches? Thanks.
--
Regards,
Peng
Hi,
I don't understand the purpose of wdcache and wlcache. The "nc" field
seems to always be 0 (as initialized in ocache_create()), and I don't
find where it is increased. But `ocache_alloc()` just calls xmalloc
without using the cache since nc is 0. So wdcache and wlcache seem to
be useless.
Do I
On Wed, Feb 6, 2019 at 4:49 PM Eric Blake wrote:
>
> On 2/6/19 4:18 PM, Peng Yu wrote:
> > Hi,
> >
> > I deleted the file parser-built, and bash still compiles and an empty
> > parser-built file will be generated upon compilation. What is the
> > purpose of thi
Hi,
yacc_EOF is mentioned in parse.y in something like this
%left '&' ';' '\n' yacc_EOF
| error yacc_EOF
But I don't find where it is defined similarly to other tokens like BAR_AND.
%token GREATER_BAR BAR_AND
Where is yacc_EOF defined?
(y.tab.c and y.tab.h are files generated by bison. so y
Hi,
I deleted the file parser-built, and bash still compiles and an empty
parser-built file will be generated upon compilation. What is the
purpose of this file? Should it be deleted? Thanks.
--
Regards,
Peng
> No, that is what volatile means. The register keyword is just an
> optimisation hint, and is mostly ignored by the compiler.
If it is ignored anyway, why is "register" used in many places in the
code? Thanks.
--
Regards,
Peng
Hi,
I see many variables declared with the "register" keyword. I know
its purpose is to tell the compiler to always access the corresponding
memory without assuming the previously accessed values are preserved.
This is usually to deal with some external devices.
But I don't understand why it is usefu
> That's a documentation convention - the all-caps in the docstring calls
> your attention to the need to search case-insensitively for the actual
> variable, while spelling it case-sensitively would make it blend into
> the sentence and make it harder to realize that the sentence is indeed
> point
> grep global_command *.?
GLOBAL_COMMAND is uppercase. But the actual variable name
global_command is in lowercase.
I think that GLOBAL_COMMAND should be changed to global_command in the comment.
--
Regards,
Peng
Hi,
GLOBAL_COMMAND is mentioned as a global variable. But I don't find it.
Is it renamed to something else?
eval.c
276-/* Call the YACC-generated parser and return the status of the parse.
277- Input is read from the current input stream (bash_input). yyparse
278: leaves the parsed command i
When I use the loadable cat, I may get the following error. The input
is a fifo in this specific case.
cat: cannot open /tmp/tmp.VXkbqFlPtH: Interrupted system call
So far, I can not make a minimal script to demonstrate the problem.
But if I replace it with coreutils cat in my code, the problem i
Hi,
I see these global or static variables (1st column) used only by one
function (2nd column). Some are from bash, some are from the libraries
that bash depends on.
It seems to be problematic to declare variables global/static but only
to use them in one function. Should these variables be made loc
> The bash-4.4 code only worked the way you want it by chance. There was a
> bug that was fixed in January, 2017, the result of
> http://lists.gnu.org/archive/html/bug-bash/2017-01/msg00018.html
> that uncovered the behavior you're complaining about.
This only explains where the change of behavior
See the following for the difference. I'd consider the behavior of
4.4.23 to be correct.
How was this bug introduced? Should there be a test case to cover this case?
$ cat main_debug.sh
#!/usr/bin/env bash
# vim: set noexpandtab tabstop=2:
echo $BASH_VERSION
declare -- null="@()"
declare --
>
>
> > https://pastebin.com/cV1jP41Y
>
> Really? What is your analysis? There are 100 duplicate global symbols
> shared between bash and other libraries? Or is it your assertion that
> one should never use the same symbol names, unconditionally? You're not
> making much of a point here.
It is th
> "Not uncommon" is stretching it, since it happens in only one place:
> lib/readline/shell.c.
No, it is not uncommon. See the analysis of duplicated function/macro
names and where they appear. There are around 100 of them. Note that
this analysis is not very accurate. But the ballpark estimate s
> There is probably no easy regex to match strings bash will tolerate as
> a function name without error. The accepted names vary in several
> contexts.
>
> http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_05
>
> "The function is named fname; the application shall
> What would you say the "suggested improvement" is here?
This is implied. If the majority of bash developers agree that
identical function names are not good, then what I found could be
turned into an explicit suggestion.
Since maybe there is a good reason, I don't want to pretend that
Hi,
It is not uncommon to see the same name used to define functions
in different .c files in the bash source code.
For example, sh_single_quote is defined in both lib/readline/shell.c
and lib/sh/shquote.c with the exact same signature. The two pieces of
code are slightly different. Do they do th
The following test cases show that the variable length can
significantly affect the runtime in bash. But the variable length
doesn't seem to have a significant effect in some other interpreted
languages, such as Python.
I can understand that variable length will slow down the runtime, as
bash is
> Have you tried 'make test'?
No, I didn't. I didn't know it was a target. I just followed the
README in that directory.
--
Regards,
Peng
On Wed, Dec 26, 2018 at 11:35 AM Chet Ramey wrote:
>
> On 12/24/18 10:35 PM, Peng Yu wrote:
> > dirname loadable gives the following error. I think the coreutils'
> > direname's convention is better. Should it be considered as a bug to
> > fix?
> >
> &g
Hi,
I see things like `cd builtins && $(MAKE) ...` in the Makefiles in
bash source code. GNU Make has the -C option for entering a
directory before building. Is the reason for the cd-then-make style
compatibility with other makes that don't support -C? Thanks.
--
Regards,
Peng
> That code hasn't really changed in almost twenty years. All the testing
> was done long ago.
Do you keep all the testing code in the bash repository? Or do you keep
the testing code separately from the bash source? Given the frugal
testing code that is in the bash source, it doesn't seem that the
> A profiler is exactly what you need here. You should profile your
> script and understand the stuff that actually matters for your goals.
> Otherwise you're just chasing unimportant things.
Again, my goal is not to profile a specific bash script. The goal is
to see what features make bash only f
> You're whacking moles. Use a profiler. That's what they're for.
I've already shown that $() is a major cause of the slowdown, and I
have reduced its usage in my code and significantly improved the
performance. Nevertheless, that doesn't mean it is not
necessary to systematical
We are talking about unit testing in the bash C source code, not bash scripts.
On Thu, Dec 27, 2018 at 8:03 PM G. Branden Robinson
wrote:
>
> At 2018-12-27T17:34:49-0800, Eduardo Bustamante wrote:
> > On Thu, Dec 27, 2018 at 5:15 PM Peng Yu wrote:
> > (...)
> > > S
On Thu, Dec 27, 2018 at 7:37 PM G. Branden Robinson
wrote:
>
> At 2018-12-27T18:39:26-0600, Peng Yu wrote:
> > What I meant in my original email is that I want something for testing
> > if there is a command line argument (one or more, the exact number
> > does not mat
> I don't believe that at all. The number of positional parameters is kept
> anyway. It's not recalculated when you compare it to another number, so
> it's just as fast as a simple comparison of two integers.
Getting the number $# is slow.
> And even if it weren't -- if performance is *that* impo
On Thu, Dec 27, 2018 at 12:27 PM Chet Ramey wrote:
>
> On 12/26/18 4:31 PM, Peng Yu wrote:
> > Hi,
> >
> > I'd like to compile hashlib.c to try its main(). But I got the
> > following error. What is the correct commands to compile it? Thanks.
>
> Think a
On Thu, Dec 27, 2018 at 3:19 PM Martijn Dekker wrote:
>
> Op 27-12-18 om 19:22 schreef Chet Ramey:
> > On 12/26/18 10:49 PM, Peng Yu wrote:
> >
> >> Although [[ -z ${1+s} ]] and (($#)) works for testing if $1 is set,
> >> neither of them are uniformly b
Hi,
[[ -v 1 ]] does not work for $1.
$ [ -v 1 ]; echo "$?"
1
$ set -- a
$ [ -v 1 ]; echo "$?"
1
Although [[ -z ${1+s} ]] and (($#)) work for testing if $1 is set,
neither of them is uniformly better performance-wise. In this case,
should [[ -v 1 ]] be supported?
set -- $(seq 1)
time for ((i=
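For reference, a sketch of the two tests mentioned above, written out:

set -- a
[[ -z ${1+s} ]] && echo '$1 is unset' || echo '$1 is set'   # ${1+s} expands to s only when $1 is set
(( $# )) && echo "at least one positional parameter"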
Hi,
I'd like to compile hashlib.c to try its main(). But I got the
following error. What is the correct command to compile it? Thanks.
$ gcc -DPROGRAM='"bash"' -DCONF_HOSTTYPE='"x86_64"'
-DCONF_OSTYPE='"darwin17.7.0"'
-DCONF_MACHTYPE='"x86_64-apple-darwin17.7.0"' -DCONF_VENDOR='"apple"'
-DLOCAL
Hi,
I cannot mkdir -p . in /tmp/ via the loadable mkdir. What is the
difference between /tmp/ and other directories? I am on Mac OS X. Is
this a bug in mkdir?
$ cd /tmp
$ mkdir -p -- .
-bash: mkdir: .: Operation not permitted
$ cd ~/
$ mkdir -p -- .
--
Regards,
Peng
sleep 10
#echo "$?"
)
$ ./main_INT.sh
^C
$ source enable.sh
$ enable sleep
$ help sleep
sleep: sleep seconds[.fraction]
Suspend execution for specified period.
sleep suspends execution for a minimum of SECONDS[.FRACTION] seconds.
On Mon, Dec 17, 2018 at 1:57 PM Peng Yu wrote:
>
The dirname loadable gives the following error. I think coreutils'
dirname's convention is better. Should this be considered a bug to
fix?
$ dirname -- -a
dirname: usage: dirname string
$(type -P dirname) -- -a
.
--
Regards,
Peng
> I suppose the first thing needed to make that work, and maybe the only
> thing needed to make that work, is agreement on the name of a search path
> environment variable that enable can use to find loadable builtins.
Why not just use an environment variable such as LOADABLES_PATH (just
like the
The artificial ulimit is to trigger the error.
My point is why bash terminates when it runs an external command that
requires a large memory. Shouldn't bash return an exit code on behalf
of the failed command and continue to the next command?
On Sunday, November 6, 2016, Eduardo Bustamante wrote:
Hi, The following example shows that bash uses xmalloc. But it seems
that using xmalloc is not a good practice. Is it better to use malloc
instead of xmalloc? In this test case, after `./main 100` fails I
still want to run the rest of the commands. So it sounds like malloc is
better.
http://stackove
OK, I see it; it checks emails and prints the prompt.
What factors do people need to consider when deciding whether to use
flex for tokenization or to write a custom tokenizer?
Checking emails and printing the prompt is, strictly speaking, not
related to tokenization. Is there an alternative way to org
Hi, I have been checking the bash source code. But it is not clear to me
how bash does tokenization, as I don't find a lex file.
Could anybody point me to where I should look for information
about tokenization in the bash source code?
--
Regards,
Peng
Hi Chet,
>>> That's the difference: if you're careful with naming and rigorous about
>>> your calling conventions, your one-time-use functions are about as close
>>> as you can get to local functions in bash, but you have to pay attention
>>> to the declaration's side effects.
>>
>> There is at le
Hi Chet,
>> Eduardo A. Bustamante López wrote:
>>> Well, if your scripts are so simple, why use local functions at all?
>> ---
>> Cleanliness, Hygiene...
>
> Please, let's not have this argument again. I think you're all using the
> term `local function' to mean different things.
>
> You seem
On Sat, Mar 14, 2015 at 8:46 AM, Linda Walsh wrote:
>
>
> Peng Yu wrote:
>>
>> Hi,
>>
>> http://mywiki.wooledge.org/ProcessSubstitution
>>
>> The above webpage says the following.
>>
>> commandA <(commandB; [commandB's exit co
On Friday, March 13, 2015, Chet Ramey wrote:
> On 3/12/15 2:13 PM, Peng Yu wrote:
> >>> The -i option obviously works with set. But it is missing in the man
> >>> page. Should this be added?
> >>
> >> No. It's really only there for completen
On Thu, Mar 12, 2015 at 1:29 PM, Greg Wooledge wrote:
> On Thu, Mar 12, 2015 at 01:13:18PM -0500, Peng Yu wrote:
>> One may want to manually set -i option in a bash script for whatever
>> reason. (In this case, it is to check COLUMNS.)
>
> http://mywiki.wooledge.org/BashF
>> The -i option obviously works with set. But it is missing in the man
>> page. Should this be added?
>
> No. It's really only there for completeness, so things like `set $-'
> work as expected without error.
But if something is in the implementation, it should also be in the
documentation, righ
Hi,
The -i option obviously works with set. But it is missing in the man
page. Should this be added?
~$ echo $-
himBH
~$ set +i
~$ echo $-
hmBH
The following lines are from the man page.
set [--abefhkmnptuvxBCEHPT] [-o option-name] [arg ...]
set [+abefhkmnptuvxBCEHPT] [+o option-n
On Mon, Mar 9, 2015 at 2:07 PM, Chet Ramey wrote:
> On 3/8/15 6:05 PM, Stephane Chazelas wrote:
>
>> Are bash questions no longer on topic here? bash-bug used to be
>> the place to discuss bash (before help-bash was created). It maps to the
>> gnu.bash.bug newsgroup. I don't think help-bash maps t
Hi,
http://mywiki.wooledge.org/ProcessSubstitution
The above webpage says the following.
commandA <(commandB; [commandB's exit code is available here from $?])
[commandB's exit code cannot be obtained from here. $? holds
commandA's exit code]
But I am wondering if there is a workaround to deal
Hi,
The following code works in bash.
for x in a b c; { echo $x; }
But I only find the following in bash man page. Does anybody know
where the above usage is documented? Thanks.
"for name [ [ in [ word ... ] ] ; ] do list ; done"
--
Regards,
Peng
Hi, I use unset to remove x from the environment once the for loop is
finished. Is this the best way to do it in bash? Thanks.
for x in a b c
do
echo "$x"
done
unset x
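An alternative sketch (the function name is made up): run the loop inside a function and declare the variable local, so there is nothing to unset afterwards; the unset above also works.

print_abc() {
  local x
  for x in a b c; do
    echo "$x"
  done
}
print_abc    # x does not leak into the calling shell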
--
Regards,
Peng
uot;-- -n".
-- -n~$
http://software.frodo.looijaard.name/getopt/
--
Regards,
Peng Yu, Ph.D.
Assistant Professor
Dept. Electrical and Computer Engineering & TEES-AgriLife Center for
Bioinformatics and Genomic Systems Engineering (CBGSE)
Texas A&M University
Office: 215F WEB
Phone: (97
Hi,
I have vi mode set.
set -o vi.
Then I type v to enter the vi editor. In the vi editor, I type 'cd
/tmp'. Then, I get the following screen output. Notice that the prompt
does not change immediately after the cd command. Could anybody
reproduce this behavior? Thanks.
~$
cd /tmp
~$ echo $PWD
/
> The bash man page already has ~70 pages manual. I don't like it to grow to
> ~700 pages (like the ABS Guide) with all the working examples you expected.
> :)
Do you use search at all? :) If you use search, it doesn't really
matter if it is a 700-page manual.
--
Regards,
Peng
> I would envision that such a completion function would assemble its list
> of possible completions by using your read-from-a-file mechanism and
> augment the list using compgen -a/compgen -b/compgen -A function. It
> would probably also want to handle glob patterns and expand them to
> potential
> Presumably you would also include aliases, shell builtins, and functions
> in this file.
Yes. I just want to replace the executables in PATH with the result from my
custom function. I think that aliases, builtins, and functions are all
in bash's memory already, so it doesn't take extra time to se
> empty lines. There is no programmable completion mechanism to complete
> on non-empty command names.
I'm wondering if it is worthwhile to add such a feature. I have run
into the problem that it is very slow to complete an
incomplete command name, especially when other programs are accessing t
Hi,
I want to customize command completion for executables:
I want to search in a file (which lists all the executables in
PATH) rather than the default PATH variable. But I don't see how to do
so, as the following help text indicates that it can only configure how to
complete the arg
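For illustration, a rough sketch of how a completion function could read its candidates from a file with compgen -W. The file name is made up, and hooking it to the initial (command) word needs the -I option of complete, which only exists in newer bash releases; older releases have no such hook, as noted elsewhere in the thread.

# ~/.cmdcache is a hypothetical file with one command name per line
_cmds_from_file() {
  COMPREPLY=( $(compgen -W "$(< ~/.cmdcache)" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -I -F _cmds_from_file    # -I (newer bash) completes the command word itself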
> There are shortcomings in _the man documentation format_ and one of them
> is that it doesn't work (at least for me...) when the documentation is
> longer than one screen or thereabouts. I've pretty much come to the
> conclusion that any man page that is over a couple of hundred lines is
> a wast
> +1 vote on getting the parameters listed with a leading dollar sign.
> The individual single character is difficult to search for but the
> combination of "$@" and so forth for the others is a useful search
> string. I have often wanted the manual to include the "$@"
> combination instead of jus
> Second, just search for the 'set' builtin, near the bottom of the man page.
Thanks for clarifying the usage of set.
I looked closely at the documentation of set and found another problem:
it says the following, but the description of -- is way down below;
it should be described with the option. A
Hi,
As I mentioned previously, there are shortcomings in man bash. Here, I
just point out another example, and I hope my suggestion will be
addressed.
A reasonable search strategy for how to set $@ is to
search for '$@' in man bash. The literal string '$@' appears at the
following location
Hi Greg,
> New users do not mess with programmable completion.
Given the context, I thought that it was clear that "new users" means
users new to command completion. If it was not clear, I am making it
explicit here.
--
Regards,
Peng
Hi,
It seems a little weird that 'until' is the way it is now. According to
the manual, until comes before the do-done block.
until test-commands; do consequent-commands; done
A common design of until in other languages is that it allows the loop
body to be executed at least once and tests the condition at
> THAT will work. But why are you writing a script to read a shell command
> and then execute it? There is already a program that reads shell commands
This capability would be useful for debugging bash scripts.
For example, I have a set of commands in a bash script, each of them
output some thing
> WHAT are you trying to DO?
I think that you might have completely missed my point. I will try to explain it
better. Let me know if this time it makes more sense to you.
I want to execute any command as if 'execute.sh' were not present,
except that I want to print the command so that I know what the
com
Hi,
~$ cat ../execute.sh
#!/usr/bin/env bash
echo "$@"
"$@"
$ ../execute.sh ls >/tmp/tmp.txt
$ cat /tmp/tmp.txt #I don't want "ls" to be in the file
ls
main.sh
'>' will not work unless eval is used in execute.sh.
$ ../execute.sh ls '>' /tmp/tmp.txt
ls > /tmp/tmp.txt
ls: cannot
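One workaround sketch for the first problem above (the echoed command ending up in the redirected file): print the command to stderr instead of stdout, so a stdout redirection only captures the command's own output. printf %q keeps the arguments quoted.

#!/usr/bin/env bash
printf '%q ' "$@" >&2    # log the command to stderr, not stdout
echo >&2
"$@"

$ ../execute.sh ls >/tmp/tmp.txt    # tmp.txt now contains only ls's output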
Hi,
I don't see if there is a way to directly modify $@. I know 'shift'.
But I'm wondering if there is any other way to modify $@.
~$ 1=x
-bash: 1=x: command not found
~$ @=(a b c)
-bash: syntax error near unexpected token `a'
--
Regards,
Peng
> You may safely use getopts (the builtin). Never getopt.
If my understanding is correct, 'getopts' doesn't support the long
option format. Hence, it does not satisfy my need and I shall not use it.
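For what it's worth, long options are usually handled in plain bash with a while/case loop rather than with getopts or getopt(1); a sketch with made-up option names:

verbose=0 output=
while (( $# )); do
  case $1 in
    --verbose)  verbose=1 ;;
    --output)   output=$2; shift ;;
    --output=*) output=${1#--output=} ;;
    --)         shift; break ;;
    -*)         echo "unknown option: $1" >&2; exit 1 ;;
    *)          break ;;
  esac
  shift
done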
--
Regards,
Peng
> And that is enough of this nonsense. I have cited three official manuals
> for you already. Let's move on.
I don't get it. Do you mean both traditional getopt and Debian getopt
are broken? To me it seems that Debian getopt was made to address the
shortcomings of traditional getopt. Yet you sti
Hi Greg,
> **NEVER** use getopt(1). It is broken. It cannot be made to work
> correctly. Its entire design is flawed.
I don't see these warnings on my systems (MacPorts and Ubuntu). (The
version of getopt on MacPorts and Ubuntu is free; I don't see
a reason that getopt cannot be p
> Then why don't you post that?
Please take a look and see if you have a better solution than the
following code without using quotearg.sh.
/tmp/tmp$ ll *
c d:
total 0
-rw-r--r-- 1 pengy wheel 0 2011-11-15 20:18:11 xx
a b:
total 0
-rw-r--r-- 1 pengy wheel 0 2011-11-15 20:18:15 yy
/tmp/tmp$ ../c
On Tue, Nov 15, 2011 at 6:43 PM, Chris F.A. Johnson
wrote:
> On Tue, 15 Nov 2011, Peng Yu wrote:
>
>>> In any case, combining a command and its arguments in a single
>>> string is almost always the wrong way to go about it.
>>
>> Please compare the t
> In any case, combining a command and its arguments in a single
> string is almost always the wrong way to go about it.
Please compare the two scripts and see if the second one makes more sense.
/tmp$ cat not_convert_args_to_string.sh
#!/bin/bash
options="$2"
find $1 $options
echo find $1
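For comparison, a sketch of an array-based version, where each option stays a separate word instead of being re-split from a string (the script layout is illustrative):

#!/bin/bash
dir=$1; shift
options=( "$@" )              # each remaining argument is one find option
find "$dir" "${options[@]}"
echo find "$dir" "${options[@]}"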
> Why not use the array instead of making it into a single string?
>
> $cmd "${args[@]}"
>
> Why are you using eval or quotearg.sh? It sounds as if you are
> making the process more complicated than it need be.
For the examples that I gave, probably it is not necessary.
I'm yet to make a co
Hi,
I find that I have to make a program quotearg.sh to convert an array
to a string by quoting each element, so that it can be used with eval.
I'm not sure if there is a way to do eval in bash without
having to use quotearg.sh. If there is no such way, should
quotearg.sh be added to bash (i
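For reference, printf's %q format does this kind of quoting natively, so a separate quotearg.sh is usually unnecessary; a sketch (and in most cases eval can be avoided entirely by expanding the array directly):

args=( 'a b' 'c"d' )
cmd=echo
eval "$cmd $(printf '%q ' "${args[@]}")"   # quote each element for reuse as shell input
"$cmd" "${args[@]}"                        # usually better: no eval at all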
On Sat, Nov 12, 2011 at 10:18 AM, Chet Ramey wrote:
> On 11/12/11 10:41 AM, Peng Yu wrote:
>> Hi,
>>
>> It is strange to me why complete doesn't print anything when it is
>> called in a bash script. I must have misunderstood some fundamentals.
>> Does anybod
On Sat, Nov 12, 2011 at 10:01 AM, Andreas Schwab wrote:
> Peng Yu writes:
>
>> It is strange to me why complete doesn't print anything when it is
>> called in a bash script. I must have misunderstood some fundamentals.
>> Does anybody know why? Thanks!
>
>
Hi,
It is strange to me why complete doesn't print anything when it is
called in a bash script. I must have misunderstood some fundamentals.
Does anybody know why? Thanks!
~$ cat main.sh
#!/usr/bin/env bash
. ~/.bashrc
complete
~$ ./main.sh
~$ complete |head
complete -F _kill kill
complete -F
Hi,
I know from the document that tilde expansion only works if the string
is unquoted (see below)
~$ cd '~/..'
-bash: cd: ~/..: No such file or directory
~$ cd ~/..
/Users$
I'm wondering, if I already have a string in a variable, whether there is a
bash-native way to do tilde expansion on it.
var='~/..'
cd $var
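One common workaround sketch: replace a leading ~ in the variable with $HOME before using it (this handles only the plain ~ form, not ~user):

var='~/..'
var=${var/#\~/$HOME}   # substitute a leading ~ with $HOME
cd "$var"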
Hi,
bash by default searches the paths specified in the environment variable
PATH (separated by ":"). I'm not aware of any cache mechanism
to save the run time (but even so, different terminals still cannot
see the same cache, hence each terminal has the overhead of creating the
cache). When
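For context, bash does remember command locations within a single shell session via the hash builtin (each shell keeps its own table, which matches the per-terminal overhead described above); a quick sketch:

$ ls >/dev/null    # first use walks PATH and remembers the location
$ hash             # lists commands this shell has already located
hits    command
   1    /bin/ls
$ hash -r          # forget everything, e.g. after changing PATH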
On Wed, Nov 9, 2011 at 10:41 AM, Greg Wooledge wrote:
> On Wed, Nov 09, 2011 at 10:29:52AM -0600, Peng Yu wrote:
>> I sourced my ~/.bashrc, which source some other files. It seems the
>> environment variables defined in these files are not seen with env.
>> Why is so?
>
&
On Wed, Nov 9, 2011 at 7:45 AM, Greg Wooledge wrote:
> On Tue, Nov 08, 2011 at 09:46:37PM -0600, Peng Yu wrote:
>> I need to use cron to run some job. I know that cron only set up very
>> basic environment. I'd like to duplicate my login environment.
>
> Just so
On Wed, Nov 9, 2011 at 7:45 AM, Greg Wooledge wrote:
> On Tue, Nov 08, 2011 at 09:46:37PM -0600, Peng Yu wrote:
>> I need to use cron to run some job. I know that cron only set up very
>> basic environment. I'd like to duplicate my login environment.
>
> Just so
Hi,
I need to use cron to run some jobs. I know that cron only sets up a very
basic environment. I'd like to duplicate my login environment. But
some environment variables are still not seen when I set the following
crontab entry. Does anybody know how to correctly set up the login
environment? (The ma
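One common approach, as a sketch (the schedule and script path are made up): run the job under a login shell with bash -lc so that the usual login startup files are read.

SHELL=/bin/bash
# every hour, run the job in a login shell so ~/.bash_profile is sourced
0 * * * * bash -lc '/path/to/job.sh' >>"$HOME/cron-job.log" 2>&1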
On Mon, Nov 7, 2011 at 8:29 AM, Dennis Williamson
wrote:
> On Mon, Nov 7, 2011 at 7:23 AM, Peng Yu wrote:
>> Hi Clark,
>>
>>> What do you mean by "1 long argument"?
>>>
>>> [bash-4.2.10] # cat foo.sh
>>> v=" a b c ( a'b
Hi Clark,
> What do you mean by "1 long argument"?
>
> [bash-4.2.10] # cat foo.sh
> v=" a b c ( a'b | "
> set -o noglob
> a=( $v )
> set +o noglob
> for i in "${a[@]}"; do
> echo "$i"
> done
> [bash-4.2.10] # bash foo.sh
> a
> b
> c
> (
> a'b
> |
> [bash-4.2.10] #
I misunderstood the usage
Hi Clark,
>> > v=" a b c ( a'b | "
>> > a=( $v )
>> > echo "${a[@]}"
>
> There's a @ char here.
I see. It's my mistake.
But I want to pass the 6 short arguments instead of 1 long argument to
echo. (echo is just an example, it can be any command that accepts
multiple arguments.)
~$ cat ./main1