Re: OT: automation
On Sunday, 1 November 2009 13:17:27, olafbuddenha...@gmx.net wrote:
> On Thu, Oct 29, 2009 at 08:34:50AM +0100, Arne Babenhauserheide wrote:
> I think that already hints at the problem: while such tools are trivial
> enough for the really simple use cases (e.g. "zmv '*.JPG' '*.jpeg'"),
> there is always a drift towards adding more and more features, to cover
> additional use cases...

I try to keep that drift low, exactly because I don't want them to become
complex monsters. If I need more complex stuff, I can always use (bash)
pre- and postprocessing :)

For one of my tools I just included one option to kick out most of that
drift: Python preprocessing -> you can pass a string with Python code to
modify a variable (markdown_data). Whenever I need to do some complex
preprocessing, I can just put it into that string and need not worry about
making the program bigger (it's in Python anyway, so almost the whole code
for that is "exec(PYTHON_STRING)").

> history), or things that I use *really* often. For example I used to do
> my Google and LEO searches by hand:
>
>     netrik google.com/search?q=foo+bar
>     netrik dict.leo.org?search=foo+bar

For these I use Alt-F2 and Konqueror: Alt-F2 -> gg:google search -> Enter
-> Konqueror starts the search. Just like a shell, but with images :)

>     leo() { netrik dict.leo.org?search="$*"; }
>     go() { netrik google.com/search?q="$*"; }

Which look almost exactly like the functions you can define in Konqueror -
but I'll try to remember them for shell work.

> Things like renaming multiple files, or global search and replace, do
> not fall in either of the categories mentioned above: they are trivial
> enough to type them out (or recall from history)

Not for me, at least...
I suffered escaping hell often enough to write scripts to avoid it :)

> > I think it might be "Python feels like home for me"-specific ;)
>
> Which just proves my point really: generic knowledge beats more specific
> tools :-) You prefer Python over sed, even though it's more complicated
> to accomplish certain tasks, because you can use the Python knowledge
> for more things. And for the very same reason shell scripting is better
> than specific tools most of the time.

Touché :)

> > Might be related to having many spaces and non-letter characters in
> > filenames, since the OS and tools damn well should not restrict how I
> > name my files :)
>
> It shouldn't, and it doesn't. But that doesn't mean it's a good idea to
> make use of this possibility...

I beg to differ. I use GUIs most of the time, and most of my friends do,
too, so my files with spaces etc. are far more convenient (easier to read)
than files with underscores and substitutions like "ae" for "ä". But many
shell commands make it inconvenient to work with them (mostly for stuff
like argument parsing, though).

> In the same vein, you could argue that a programming language should not
> restrict characters you can use in an identifier -- and indeed some
> languages (PHP being one of them IIRC) allow pretty much everything in
> identifiers, including spaces.

Python 3 now allows almost everything except spaces - and I hope that over
the next few years most programs will switch to it, so it can become the
standard installed version.

> Still it's not very wise to use such
> identifiers. Just use underscores and profit.

I'd have changed the last word into "suffer eye pain" - at least for
normal files :) But in code I agree. I want it to be readable to others
more than I want it to look nice to me.

Best wishes,
Arne

signature.asc
Description: This is a digitally signed message part.
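Most of the escaping hell mentioned above disappears once every expansion is quoted. A minimal sketch (filenames invented for illustration) of the classic lower-case-the-extension rename, safe for names containing spaces:

```shell
set -eu
dir=$(mktemp -d)            # throwaway sandbox so the sketch is self-contained
cd "$dir"
touch "holiday photo.JPG" "plain.JPG"

for f in *.JPG; do
  [ -e "$f" ] || continue             # glob matched nothing: skip the literal
  mv -- "$f" "${f%.JPG}.jpeg"         # quoted "$f" keeps spaces intact
done

ls
```

The `--` guard and the quoted `"$f"` are the two habits that make such loops survive arbitrary filenames without any per-character escaping.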
Re: subhurds etc.
On Sunday, 1 November 2009 13:52:47, olafbuddenha...@gmx.net wrote:
> The original idea for versioning filesystems was to automatically keep
> track of individual changes, and it failed magnificently.

As far as I know they didn't have atomic commits back then - am I right
about that?

> This is BTW the same reason why I consider the manual git-gc to be a
> feature, as opposed to other systems that try to do various kinds of
> automatic packing and garbage collection...

Heh, it goes on ;) For Mercurial I know that it performs automatic
packing, but no garbage collection, because its repository model doesn't
need garbage collection. Keyphrase: "disk-access optimized compressed
incremental diffs with snapshots, and one data file per file in the repo."
-> http://hgbook.red-bean.com/read/behind-the-scenes.html

> Well, actually the snapshotting functionality is kind of a side effect
> of atomic updates, which comes almost for free. But it's generally seen
> as a feature for easing backups.

How exactly do they differ from a normal file system with a Mercurial/Git
backend for revisioning with a time-based commit schedule?

Best wishes,
Arne

--
singing a part of the history of free software -
http://infinite-hands.draketo.de

signature.asc
Description: This is a digitally signed message part.
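The "time-based commit schedule" idea above can be sketched in a few lines of shell. This is only an illustration, not anyone's actual setup: it assumes git is installed, and the directory and commit interval are invented (in practice the `snapshot` function would be run from cron).

```shell
set -eu
workdir=$(mktemp -d)                 # stands in for the watched directory
cd "$workdir"
git init -q .

snapshot() {
  git add -A
  # -c flags avoid depending on any global git identity being configured
  git -c user.name=snap -c user.email=snap@example.invalid \
      commit -q -m "snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
}

echo "draft 1" > notes.txt; snapshot   # first scheduled run
echo "draft 2" > notes.txt; snapshot   # next scheduled run
git rev-list --count HEAD              # number of snapshots recorded so far
```

The difference from a snapshotting filesystem is exactly the one debated in the thread: here the schedule is an external policy bolted on top, not a side effect of the filesystem's own atomic updates.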
Re: unionmount branches
Hello,

On Thu, Oct 29, 2009 at 07:11:16AM +0100, olafbuddenha...@gmx.net wrote:
> On Sun, Oct 25, 2009 at 06:10:42PM +0200, Sergiu Ivanov wrote:
> > On Sat, Oct 24, 2009 at 06:46:49AM +0200, olafbuddenha...@gmx.net
> > wrote:
> >
> > > While I do think that such a main "unionmount" branch is probably a
> > > good idea, it should contain only the "approved" patches; while
> > > those still in development would better be placed in true topic
> > > branches...
> >
> > OK. I'll stick to this in the future. Shall I move the not yet
> > completely approved patches away from master-unionmount into
> > corresponding topic branches?
>
> I think so. However, it's probably better not to change the existing
> master-unionmount branch, but rather drop it altogether and create a
> new one with a different name once you actually start adding the
> approved patches. Otherwise, people who already checked out the original
> branch will get in trouble...

OK, I'll do that.

> (Also, I still don't get the point of the "master-" prefix. This is not
> CVS, where we needed to remember where the branch comes from, as it was
> hard to figure out from history; and it was crucial to know, because
> merging had to be handled in a strictly controlled manner to work at
> all...)

Frankly speaking, I'm generally inclined to doubt the usefulness of this
prefix, too. This is quite fortunate, since I can create a new branch
``unionmount'', thus both achieving a better name and creating a new
branch of approved patches only.

Regards,
scolobb
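The branch shuffle agreed on above boils down to two git commands. A throwaway sketch (the temporary repo and committer identity are invented; in the real incubator repository one would also delete the published branch on the remote, e.g. with `git push origin :master-unionmount`):

```shell
set -eu
repo=$(mktemp -d)                    # throwaway stand-in for incubator.git
cd "$repo"
git init -q .
g() { git -c user.name=t -c user.email=t@example.invalid "$@"; }
g commit -q --allow-empty -m "base"

git branch master-unionmount         # the old, mixed branch
git branch -D master-unionmount      # drop it altogether, as suggested
git checkout -q -b unionmount        # fresh branch for approved patches only
git symbolic-ref --short HEAD        # current branch name
```

Starting a brand-new branch, rather than rewriting the old one, is what keeps people who already checked out master-unionmount out of trouble.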
Re: nsmux: Build system
Hello,

On Thu, Oct 29, 2009 at 07:19:47AM +0100, olafbuddenha...@gmx.net wrote:
> On Sun, Oct 25, 2009 at 06:13:47PM +0200, Sergiu Ivanov wrote:
>
> > I think I have an extra argument for the second way: if I create a
> > separate commit before the merge, things will stop working,
>
> Yeah, you are right: you can't adapt the build system of the main Hurd
> tree before merging in the directory; and you can't merge the directory
> in without adapting the build system in the main Hurd... I forgot that
> adapting the build system requires changes in the global tree, not only
> in the subdirectory in question.

Indeed; I forgot this detail, too.

> > and, as you once said, a commit is a set of changes after which
> > everything works :-)
>
> Indeed. The established term for this is "bisectability" -- I just
> didn't know it at the time :-)

Aha, great :-) I'll keep this in mind, thank you :-)

Regards,
scolobb
Re: subhurds etc.
Hello,

On Sun, Nov 01, 2009 at 01:52:47PM +0100, olafbuddenha...@gmx.net wrote:
> On Wed, Oct 28, 2009 at 06:51:43PM +0200, Sergiu Ivanov wrote:
>
> > I do backups of sensitive information, but the reason I want a
> > snapshotting filesystem for is the automated decision when to do the
> > backup.
>
> There is no automated decision, that was my whole point!
>
> The original idea for versioning filesystems was to automatically keep
> track of individual changes, and it failed magnificently. The new
> snapshotting filesystems OTOH can work, because they do *not* try to
> automate it -- that's exactly what makes them different.

Hm, I didn't realize automated snapshotting was so bad. Though I must
acknowledge that I cannot think of an acceptable way to decide when to
do snapshots.

> > OTOH, a snapshotting filesystem is not exactly about backups IMHO.
>
> Well, actually the snapshotting functionality is kind of a side effect
> of atomic updates, which comes almost for free. But it's generally seen
> as a feature for easing backups.

Clear, I'll correct my conceptions :-)

Regards,
scolobb
Re: Could the filter for nsmux be included in incubator.git?
Hello,

On Mon, Nov 02, 2009 at 12:31:33PM +0100, olafbuddenha...@gmx.net wrote:
> On Tue, Oct 27, 2009 at 09:51:29PM +0200, Sergiu Ivanov wrote:
> > On Tue, Oct 27, 2009 at 07:14:17PM +0100, Thomas Schwinge wrote:
> >
> > > [...] named filter there. (Or perhaps filterfs? -- your dice.)
> >
> > The name should stay ``filter'' because a filter will not publish a
> > virtual filesystem.
>
> I must say that I always thought of "filter" as an internal codename
> only, with a proper name yet to be determined...

Yes, I remember that we haven't yet decided on the final name for the
filter translator. By sticking to ``filter'' I am not trying to supersede
this decision; there just has to be a name by which to call this
translator, so I chose ``filter'' as the most obvious one. I don't think
it would be very difficult to rename it once we consider that appropriate.

Also, IIRC, the filter is expected to be run via symbolic links and to
extract the name of the translator to be filtered out from its command
line arguments (I mean argv[0]); in this case the real name of the filter
is not extremely important, since the end user will rarely use that name.

Regards,
scolobb
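The argv[0] convention described above can be demonstrated with a plain shell script: one program body, run through differently named symlinks, picks its behaviour from the name it was invoked by. The names and messages here are invented for illustration (the real filter is a C translator, not a script):

```shell
set -eu
dir=$(mktemp -d)
cd "$dir"

# One program body; behaviour is chosen from the name it was run as.
cat > dispatch <<'EOF'
#!/bin/sh
case "${0##*/}" in
  filter-foo) echo "would filter out translator: foo" ;;
  filter-bar) echo "would filter out translator: bar" ;;
  *)          echo "run me through a filter-NAME symlink" ;;
esac
EOF
chmod +x dispatch

ln -s dispatch filter-foo
ln -s dispatch filter-bar

./filter-foo
./filter-bar
```

Since the dispatch key is the invocation name rather than a hard-coded string, renaming the translator later costs nothing, which is the point made in the message.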
Re: unionfs: ULFS information storage issues
Hello,

On Thu, Oct 29, 2009 at 07:30:41AM +0100, olafbuddenha...@gmx.net wrote:
> On Mon, Aug 17, 2009 at 01:03:30PM +0300, Sergiu Ivanov wrote:
>
> > If one would like to keep *both* the information about the filesystems
> > *and* the ports to their root nodes in *a single* place, one would
> > have two choices: either add something like a ``port'' field to each
> > entry in the ulfs_chain, thus duplicating the port list in the
> > netfs_root_node and still leaving the necessity of synchronizing
> > things explicitly; or keep the information about the filesystems (i.e.
> > the information stored in ulfs_chain) in *every* node, thus
> > duplicating the same bits of information across a (potentially) large
> > number of locations.
>
> Obviously we can't keep the information for all the nodes and for the
> constituent filesystems in one place, considering that there are several
> nodes per filesystem. All we want is to make sure that the information
> in the list of any particular node and in the list of filesystems can be
> easily associated.
>
> I can see at least three ways to do it:
>
> - Make sure that the per-node lists can also easily be addressed by a
>   numeric index, so we can read both the filesystem list and the
>   per-node lists with a simple loop counter.

I guess an efficient implementation of this approach is possible either
via arrays or via libihash (if I understand its purpose correctly). At
first I thought that arrays were a clumsy solution, but having looked at
it more, I am not so sure. Arrays would have to be resized when the list
of unioned directories is modified, which isn't normally a frequent
operation. Given this reasoning, I wouldn't be much of a fan of libihash
in this context.

> - Make sure that it's easy to figure out the numeric index of any given
>   per-node list entry, so when iterating the per-node list, we can
>   directly look up the corresponding entry in the filesystem list
>   without having to keep an extra counter.

This might be possible by keeping an index in each entry of the per-node
lists, but I'm not sure whether this is really efficient: if both the
global and the per-node lists are still implemented as linked lists,
knowing the index is no advantage at all. OTOH, we could combine the
current and the previous points: keep the global list in an array and
implement the per-node lists as linked lists. To iterate the lists in
parallel, one would traverse the per-node list and directly access the
corresponding global list entry by the index stored in the current
per-node list entry.

> - Let the per-node entries keep pointers to the entries in the
>   filesystem list, so we can access them directly when iterating the
>   per-node lists.

I think I'll declare myself a fan of this idea :-) IMHO, it's the most
efficient and elegant of all :-) It keeps the advantage of linked lists,
and the cost of keeping the global and per-node lists synchronized is
still minimal.

Regards,
scolobb
[PATCH] Implement the sync libnetfs stubs.
* netfs.c (netfs_attempt_sync): Sync every writable directory associated with the supplied node. (netfs_attempt_syncfs): Send file_syncfs to every writable directory maintained by unionfs. --- Hello, On Mon, Oct 26, 2009 at 01:03:29AM +0100, olafbuddenha...@gmx.net wrote: > On Mon, Aug 17, 2009 at 11:44:59PM +0300, Sergiu Ivanov wrote: > > > @@ -282,7 +283,45 @@ error_t > > netfs_attempt_sync (struct iouser *cred, struct node *np, > > int wait) > > { > > - return EOPNOTSUPP; > > + /* The error we are going to report back (last failure wins). */ > > + error_t final_err = 0; > > + > > + /* The index of the currently analyzed filesystem. */ > > + int i = 0; > > I think the initialization of "i" should be as close to the loop as > possible -- after all, it's a loop counter... I moved it closer to the loop itself, but I didn't move it further than locking the mutex, because locking the mutex is also a part of initialization, and I am somehow inclined to keep variable definitions before operations (but this is subjective). > > +/* Get the information about the current filesystem. */ > > +err = ulfs_get_num (i, &ulfs); > > +assert (err == 0); > > Minor nitpick: it's more common to do such checks with "!err". Fixed. > > + > > +/* Since `np` may not necessarily be present in every underlying > > + directory, having a null port is perfectly valid. */ > > +if ((node_ulfs->port != MACH_PORT_NULL) > > + && (ulfs->flags & FLAG_ULFS_WRITABLE)) > > Not sure whether I asked this before: is there actually any reason not > to attempt syncing filesystems without FLAG_ULFS_WRITABLE as well?... > > (I don't know how file_sync() or file_syncfs() bahave on filesystems or > nodes that really are not writable -- but IIRC that's not what > FLAG_ULFS_WRITABLE conveys anyways?...) A quick search didn't reveal any indications about whether these RPCs should fail on a really read-only filesystem, so, technically, syncing such filesystems should not be a problem. 
At first, I could not see *conceptual* reasons for syncing directories not marked with FLAG_ULFS_WRITABLE flag, but I can see one now. Since this unionfs-specific flag only influences the work of unionfs, and unionfs does not control *regular* files in unioned directories, a user may modify files in directories not marked with FLAG_ULFS_WRITABLE. On invocation of file_sync{,fs} on such a directory, these changes should be expected to be synced, too. That's why I think I agree with you and I made unionfs sync every unioned directory. > > + /* Sync every writable filesystem maintained by unionfs. > > + > > + TODO: Rewrite this after having modified ulfs.c and node.c to > > + store the paths and ports to the underlying directories in one > > + place, because now iterating over both lists looks ugly. */ > > + node_ulfs_iterate_unlocked (netfs_root_node) > > + { > > +error_t err; > > + > > +/* Get the information about the current filesystem. */ > > +err = ulfs_get_num (i, &ulfs); > > +assert (err == 0); > > + > > +/* Note that, unlike the situation in netfs_attempt_sync, having a > > + null port here is abnormal. */ > > Perhaps it would be helpful to state more explicitely that having a NULL > port *on the unionfs root node* is abnormal -- I didn't realize this > point at first. > > (Maybe you should actually assert() this.) Done. Regards, scolobb --- netfs.c | 83 -- 1 files changed, 80 insertions(+), 3 deletions(-) diff --git a/netfs.c b/netfs.c index 89d1bf6..84bc779 100644 --- a/netfs.c +++ b/netfs.c @@ -1,5 +1,6 @@ /* Hurd unionfs - Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. + Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation, Inc. + Written by Moritz Schulte . This program is free software; you can redistribute it and/or @@ -282,7 +283,45 @@ error_t netfs_attempt_sync (struct iouser *cred, struct node *np, int wait) { - return EOPNOTSUPP; + /* The error we are going to report back (last failure wins). 
*/ + error_t final_err = 0; + + /* The information about the currently analyzed filesystem. */ + ulfs_t * ulfs; + + /* The index of the currently analyzed filesystem. */ + int i = 0; + + mutex_lock (&ulfs_lock); + + /* Sync every writable directory associated with `np`. + + TODO: Rewrite this after having modified ulfs.c and node.c to + store the paths and ports to the underlying directories in one + place, because now iterating over both lists looks ugly. */ + node_ulfs_iterate_unlocked (np) + { +error_t err; + +/* Get the information about the current filesystem. */ +err = ulfs_get_num (i, &ulfs); +assert (!err); + +/* Since `np` may not necessarily be present in every underlying + directory, having a null port is perfectly valid. */ +if ((node_ulfs->port !=
[PATCH 3/3] Add the mountee to the list of merged filesystems.
* mount.c (start_mountee): Add the mountee's filesystem to the list of merged filesystems. * node.c (node_init_root): Take into consideration the fact that an empty string refers to the mountee root. * ulfs.c (ulfs_check): Likewise. (ulfs_register): Don't check whether "" is a valid directory. --- Hello, On Fri, Oct 30, 2009 at 10:13:09AM +0100, olafbuddenha...@gmx.net wrote: > On Mon, Aug 17, 2009 at 08:55:37PM +0300, Sergiu Ivanov wrote: > > On Sun, Aug 16, 2009 at 07:56:03PM +0200, olafbuddenha...@gmx.net wrote: > > > On Mon, Aug 03, 2009 at 08:42:27PM +0300, Sergiu Ivanov wrote: > > > > > + /* A path equal to "" will mean that the current ULFS entry is the > > > > + mountee port. */ > > > > + ulfs_register ("", 0, 0); > > > > > > This comment would actually be more appropriate near the definition of > > > the actual data structure and/or the function filling it in... > > > > > > Of course, it doesn't hurt to mention it here *in addition* to that :-) > > > > I've added the corresponding comment to ulfs_register, but I didn't > > add anything to variable or structure declarations, because I'm not > > sure whether it would be suitable to describe the convention in the > > comment to the declaration of struct ulfs or in the comment to the > > declaration of ulfs_chain. > > The latter I'd say -- it's not really a property of the ulfs structure > itself, but rather a special entry in the list... I've added the corresponding comment to both declaration and definition of ulfs_chain_start (in ulfs.[ch]). > > Also, in ulfs.h, both are near the declaration of ulfs_register, so it > > seems to me that it's sufficient to describe the convention in the > > comment to ulfs_register only. > > Perhaps. Though generally, properly documenting data structures is more > important than documenting functions... So I'd rather do it the other > way round :-) Hm, I didn't know this convention; I'll keep it in mind. 
I have eventually commented both the data structures and the functions, which, I hope, is not a problem :-) > > > I actually wonder whether the patches are really split up in the > > > most useful manner... But I'd rather leave it as is now. > > > > I'm asking out of pure interest: what different way of splitting the > > functionality across patches do you envision? > > Quite frankly, I don't know :-) I see :-) > > diff --git a/mount.c b/mount.c > [...] > > @@ -535,8 +539,13 @@ node_init_root (node_t *node) > > break; > > > >if (ulfs->path) > > - node_ulfs->port = file_name_lookup (ulfs->path, > > - O_READ | O_DIRECTORY, 0); > > + { > > + if (!ulfs->path[0]) > > You forgot to indent the contents of the block I think?... Yeah, sure :-( Corrected. > > diff --git a/ulfs.c b/ulfs.c > [...] > > @@ -212,14 +216,16 @@ ulfs_for_each_under_priv (char *path_under, > >return err; > > } > > > > -/* Register a new underlying filesystem. */ > > +/* Register a new underlying filesystem. A null path refers to the > > + underlying filesystem; a path equal to an empty string refers to > > + the filesystem of the mountee. */ > > This comment is very confusing, as "underlying filesystem" is used in > two different meanings side by side... Please try to reword it. > > It's very unfortunate that the original unionfs often refers to the > constituents of the union as "underlying filesystems" -- perhaps it > would be useful to change this globally... If I understood you correctly, we decided to drop the word combination ``underlying filesystem'' in the meaning of ``unioned directory'' in the whole unionfs. I'll do this in a separate patch soon (I hope). 
> Aside from these formalities, this patch looks fine to me :-) Great :-) Regards, scolobb --- mount.c | 14 +- mount.h |4 +++- node.c | 15 --- ulfs.c | 23 ++- ulfs.h |8 ++-- 5 files changed, 52 insertions(+), 12 deletions(-) diff --git a/mount.c b/mount.c index e325d3d..64dc63b 100644 --- a/mount.c +++ b/mount.c @@ -27,6 +27,7 @@ #include "mount.h" #include "lib.h" +#include "ulfs.h" /* The command line for starting the mountee. */ char * mountee_argz; @@ -138,7 +139,9 @@ start_mountee (node_t * np, char * argz, size_t argz_len, int flags, return err; } /* start_mountee */ -/* Sets up a proxy node and sets the translator on it. */ +/* Sets up a proxy node, sets the translator on it, and registers the + filesystem published by the translator in the list of merged + filesystems. */ error_t setup_unionmount (void) { @@ -165,6 +168,15 @@ setup_unionmount (void) if (err) return err; + /* A path equal to "" will mean that the current ULFS entry is the + mountee port. */ + ulfs_register ("", 0, 0); + + /* Reinitialize the list of merged filesystems to take into account + the newly added mountee's filesystem. */ + ulfs_check (); + node_init
[PATCH] Implement the sync libnetfs stubs.
* netfs.c (netfs_attempt_sync): Sync every directory associated with the supplied node. (netfs_attempt_syncfs): Send file_syncfs to every directory maintained by unionfs. --- Hello, On Wed, Nov 04, 2009 at 06:56:41PM +0200, Sergiu Ivanov wrote: > > That's why I think I agree with you and I made unionfs sync every > unioned directory. I'm terribly sorry, I've posted the wrong version of the patch :-( This version is correct. Regards, scolobb --- netfs.c | 79 -- 1 files changed, 76 insertions(+), 3 deletions(-) diff --git a/netfs.c b/netfs.c index 89d1bf6..440265d 100644 --- a/netfs.c +++ b/netfs.c @@ -1,5 +1,6 @@ /* Hurd unionfs - Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. + Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation, Inc. + Written by Moritz Schulte . This program is free software; you can redistribute it and/or @@ -282,7 +283,44 @@ error_t netfs_attempt_sync (struct iouser *cred, struct node *np, int wait) { - return EOPNOTSUPP; + /* The error we are going to report back (last failure wins). */ + error_t final_err = 0; + + /* The information about the currently analyzed filesystem. */ + ulfs_t * ulfs; + + /* The index of the currently analyzed filesystem. */ + int i = 0; + + mutex_lock (&ulfs_lock); + + /* Sync every directory associated with `np`. + + TODO: Rewrite this after having modified ulfs.c and node.c to + store the paths and ports to the underlying directories in one + place, because now iterating over both lists looks ugly. */ + node_ulfs_iterate_unlocked (np) + { +error_t err; + +/* Get the information about the current filesystem. */ +err = ulfs_get_num (i, &ulfs); +assert (!err); + +/* Since `np` may not necessarily be present in every underlying + directory, having a null port is perfectly valid. 
*/ +if (node_ulfs->port != MACH_PORT_NULL) + { + err = file_sync (node_ulfs->port, wait, 0); + if (err) + final_err = err; + } + +++i; + } + + mutex_unlock (&ulfs_lock); + return final_err; } /* This should sync the entire remote filesystem. If WAIT is set, @@ -290,7 +328,42 @@ netfs_attempt_sync (struct iouser *cred, struct node *np, error_t netfs_attempt_syncfs (struct iouser *cred, int wait) { - return 0; + /* The error we are going to report back (last failure wins). */ + error_t final_err = 0; + + /* The index of the currently analyzed filesystem. */ + int i = 0; + + /* The information about the currently analyzed filesystem. */ + ulfs_t * ulfs; + + mutex_lock (&ulfs_lock); + + /* Sync every unioned directory maintained by unionfs. + + TODO: Rewrite this after having modified ulfs.c and node.c to + store the paths and ports to the underlying directories in one + place, because now iterating over both lists looks ugly. */ + node_ulfs_iterate_unlocked (netfs_root_node) + { +error_t err; + +/* Get the information about the current filesystem. */ +err = ulfs_get_num (i, &ulfs); +assert (err == 0); + +/* Note that, unlike the situation in netfs_attempt_sync, having a + null port on the unionfs root node is abnormal. */ +assert (node_ulfs->port != MACH_PORT_NULL); +err = file_syncfs (node_ulfs->port, wait, 0); +if (err) + final_err = err; + +++i; + } + + mutex_unlock (&ulfs_lock); + return final_err; } /* lookup */ -- 1.6.4.3
dash, PATH
Hello. I installed the Hurd in qemu from the L1 DVD iso, and only now did
I notice an error at the very end of native-install:

    Setting up apt-utils (0.7.24) ...
    I just make sure that /libexec/runsystem is properly updated.
    ./MAKEDEV: 53: function: not found
    eval: 1: hd0s1: not found
    ./MAKEDEV: 56: Syntax error: "}" unexpected
    Couldn't determine root partition, sorry.
    Some info about what to do afterwards follows.

Unlike the previous time, I had to set up /etc/fstab entirely by myself
(all lines were commented out and the content didn't reflect the real
partitions). Everything I did followed my previous attempt, except that
this time I allowed dash to be the default /bin/sh shell. I guess this
doesn't have much to do with the error, or am I wrong?

For some reason PATH consists only of /bin, /usr/bin, /usr/local/bin and
/usr/games. /sbin etc. are missing - is that intentional? This didn't
happen with bash.

PS: I tried reinstalling one more time. I checked for typos and there
were none. So now I am starting to believe that selecting dash has
something to do with the different behaviour. In case all of this is
known: how do I make it work properly?

Now on a different topic: I know this is kind of in the FAQ, but it is
quite unfortunate that the Hurd cannot deal with SATA disks. Now that I
have learnt that the Hurd is quite usable for me, I would love to run it
on a real machine, because qemu is limiting. I am not going to ask when
the drivers will be available. I would like to know what part of
Hurd/Mach is responsible for loading and handling these drivers. What
should I pay attention to if I would like to learn more about
writing/porting drivers for serial SCSI? I know I might not be able to
write them; still, it would help to know how this works (or is supposed
to work) with the Hurd. Where are the actual drivers supporting PATA
drives? I tried to search the code but I found nothing, as I am not
familiar with the composition and hierarchy of the Hurd yet.

Best wishes,
Jakub Daniel
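For what it's worth, the `./MAKEDEV: 53: function: not found` line in the transcript above is the classic symptom of a bashism: dash only implements the POSIX form of function definitions and does not know bash's `function` keyword, so a script using `function name { ... }` breaks exactly this way under dash. A minimal sketch of the portable form (the `hd0s1` value is just borrowed from the error message for illustration):

```shell
set -eu

# Portable (POSIX) function definition - accepted by dash and bash alike.
# bash additionally accepts `function root_part { ... }`, which dash
# rejects with a "function: not found" style error.
root_part() {
  echo "hd0s1"
}

root_part
```

This would be consistent with the observation that the error only appears when dash is selected as /bin/sh.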
[PATCH 2/3] Implement mountee startup.
* mount.c (mountee_node): New variable. (mountee_root): Likewise. (mountee_started): Likewise. (start_mountee): New function (based on node_set_translator in nsmux). (setup_unionmount): New function. * mount.h (mountee_root): New variable. (mountee_started): Likewise. (start_mountee): New function. (setup_unionmount): New function. * netfs.c (netfs_validate_stat): Start the mountee at the first invocation. --- Hello, On Thu, Oct 29, 2009 at 06:37:54AM +0100, olafbuddenha...@gmx.net wrote: > On Mon, Aug 17, 2009 at 07:15:09PM +0300, Sergiu Ivanov wrote: > > > AIUI, this title line does say that some code for starting the mountee > > is added. Or do I understand something wrong? > > The fact that you mention the specific detail "in a lazy fashion", > creates the impression that this is the major point -- it sounds like > the purpose of this patch is changing the code to use "lazy fashion" > instead of some other approach used before... It's not at all clear that > this patch adds the mountee starting code for the first time. Hm, I can see your point. > > I changed the title of this patch because the very first one -- ``Add > > the code for starting up the mountee'' -- is too verbose (almost every > > patch *adds* some code). > > I don't think it is too verbose. > > In either case, I don't quite understand that you try to make it less > verbose by adding more detail?... :-) > > "Add mountee startup" or "implement mountee startup" would be perfectly > fine -- saying the same thing in less words. Or even just "start > mountee", if you think the "add" is superfluous. Ah :-) I've changed the name to ``Implement mountee startup''. I chose the titles for this series of patches quite a time ago, so, frankly speaking, I've got so much used to them that I cannot analyze them properly. 
> > I won't submit patch with corrections you mention in this E-mail right > > away, because the corrections are mainly about changing some comments > > or strings and I think it will be harder for you to review the changes > > if I post the whole patch again. > > Well, I can't really give a final ACK without seeing the whole patch in > its final form... I'm sending the current version of the patch in this mail. I didn't intend to prevent you from reviewing the patch; rather I wanted to save you from seeking a couple of minor modifications in the whole patch. > > Changed to: ``The mountee will be sitting on this node. This node is > > based on the netnode of the root node (it is essentially a clone of > > the root node), so most RPCs on this node can be automatically carried > > out correctly. Note the we cannot set the mountee on the root node > > directly, because in this case the mountee's filesystem will obscure > > the filesystem published by unionfs.'' > > "most RPCs ont this node can be automatically carried out correctly" is > way too vague... It's not ever clear what "correct" means in here, no > what RPCs you mean. > > I think you should say that the mountee is set on a (clone of) the > unionfs root node, so that unionfs appears as the parent translator of > the mountee. AIUI that's the idea behind it, right? What exactly do you mean by ``parent translator''? I must acknowledge I haven't heard the term ``parent'' applied to translators (I can attribute it to processes only). Do you want to say that the goal of setting the mountee on a clone of the root node is to make unionfs appear as the underlying translator to the mountee? If so, then yes, the idea behind cloning is really this one and I will change the comment accordingly. > > > Why are you passing O_READ, anyways?... > > > > The flags which I pass to start_mountee are used in opening the port > > to the root node of the mountee. 
(I'm sure you've noticed this; I'm > > just re-stating it to avoid ambiguities). Inside unionfs, this port > > is used for lookups *only*, so O_READ should be sufficient for any > > internal unionfs needs. Ports to files themselves are not proxied by > > unionfs (as the comment reads), so the flags passed here don't > > influence that case. > > Hm, but wouldn't unionfs still need write permissions to the directories > for adding new entries, when not in readonly mode?... Well, obviously, O_READ permission on a directory is sufficient to create files in it. I got this suspicion from looking at unionfs code: when unionfs looks up a directory, it creates a node for it. When io_stat is invoked on this node, unionfs opens a port to the corresponding directory with O_READ flags. Subsequent requests to create regular files (for instance) are carried out on this (O_READ) port. I wrote the following to test this supposition: #define _GNU_SOURCE 1 #include #include #include int main (void) { mach_port_t dir_p = file_name_lookup ("/tmp/", O_READ, 0); mach_port_t file_p = file_name_lookup_under (dir_p, "new-file", O_READ | O_CREAT, 0666); mach_port_deallocate (mac
Re: OT: automation
On Sunday, 1 November 2009 at 11:54:28, olafbuddenha...@gmx.net wrote:

> Another variant is zmv, which is part of zsh. It comes with a whole
> language of its own for specifying non-trivial filename patterns...
> Which is just idiocy. People would be much better off spending the time
> on learning generic for loops and sed instead, which come in handy in
> other situations as well, instead of a single-purpose language only for
> this.

At least now I know what people mean by "powerful zsh" :) But for that I have Python, which I can also use to write most kinds of programs :)

> > What I use of my shell is for loops, some globbing, pipes with sed,
> > grep, find, and such.
>
> Yeah, these are exactly the generic tools I mean.

Oh, OK - I thought you meant more esoteric stuff :) They came slowly, though - one little trick at a time.

> So you want:
>
>    find -type f -print0 | xargs -0 -L 1 echo sed -i 's/orig/new/'

-L is what I searched for for hours - many times now - but I never knew exactly how to search for it... It saves me from

   find | sed s/^/\"/ | sed s/$/\"/ | xargs

(and some more evil constructs...) Many thanks!

> Where does escaping come in here at all?... (Unless you mean the actual
> sed script, which is usually a constant string, and it's generally a
> good idea to put it in single quotes -- I never even considered leaving
> these out...)

I mean stuff like this: sed 's/blah/blubb/blau/' (I didn't know that I can just enclose the whole 's///' in quotes - but now that I see it, it's clear - it's just an argument)

> Of course there are other situations where escaping is indeed
> necessary. However, most of the time it boils down to learning to use
> "$i" instead of bare $i:
>
>    for i in *; do mv "$i" `<<<"$i" sed 's/\.JPG$/.jpeg/'`; done
>
> I agree though that quoting is the single most problematic issue in
> shell scripting.

What does the <<< do in there?
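To make the quoted rename idiom concrete, here is a minimal sketch, assuming bash or zsh (the directory and file names are invented for illustration; `<<<"$i"` is a "herestring" that feeds the string on its right to the command's stdin):

```shell
# Minimal sketch of the quoted rename loop, assuming bash/zsh.
# Directory and file names are invented for illustration.
rm -rf /tmp/rename-demo
mkdir -p /tmp/rename-demo
cd /tmp/rename-demo
touch "a photo.JPG" "b.JPG"

for i in *.JPG; do
    # <<<"$i" pipes the filename through sed's stdin (herestring);
    # quoting "$i" keeps spaces and other odd characters intact.
    mv -- "$i" "$(sed 's/\.JPG$/.jpeg/' <<<"$i")"
done
```

The double quotes around both `"$i"` and the command substitution are what make this safe for filenames containing spaces.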
> Of course you can teach a writer to use text files, because text files
> are more powerful -- but they are only more powerful if you also teach
> him the stuff which can actually deal better with text files... Which
> is shell scripting.

And version tracking and website creation. That's one thing I had to manage for my free roleplaying system: get every contributor to write plain text, or at least RTF, but not Open Document files, which are terrible to version (why did they have to split the data into several files when they use XML anyway?).

> > It's far less versatile than the shell, but it does what he needs and
> > he doesn't have to spend as much time learning things he won't really
> > need (improving one's writing skills takes enough learning time).
>
> I don't buy this kind of argument. Most people nowadays spend *a lot*
> of their time working with computers: many hours a day. And every
> regular computer user will need to do some less common stuff now and
> then. Some shell scripting skills will always pay off.

Most computer users nowadays never enter a shell - and never means never, because they don't even know they have a shell. For them, the equivalent of shell scripting is stuff like Automator and WorKflow:

-> http://www.kde-apps.org/content/show.php?content=43624

(Automator is not free, so it gets no cookie - I mean no link :) )

Best wishes,
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
A man is threatened with a knife in the street. Two policemen are there
at once and hold up a banner in front of it: "Illegal scene. Nobody may
see this." The man is robbed, stabbed, and bleeds to death, because the
policemen have both hands full. Welcome to Germany. Censorship is
beautiful. (http://draketo.de)
--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
Re: Mercurial vs. git
On Sunday, 1 November 2009 at 09:53:44, olafbuddenha...@gmx.net wrote:

> > > The point is that there is hardly anything you need to learn for
> > > specific tasks like integrating many branches -- once you
> > > understand the basics, most specific tasks become trivial!
> >
> > That's not really a specific task to me, but basic operation...
>
> Well, it was you who mentioned it as something not everyone needs to
> know... :-)

Damn... *scans for inconsistency in brain* I assume it's because it isn't really basic for me to do with git, but it is trivial for me to do with Mercurial... (In git I still need to read my self-written little "news-contributing" guide in the wiki every time I want to merge a branch...)

> there isn't really anything new you need to learn to be efficient in
> this specific situation, once you have a good understanding of the
> fundamental concepts. And the same is true for most other use cases
> too.

And here's exactly where I see a problem: you need that good understanding from the beginning - but once the thought "this is hard" has taken root in the mind, it's damn hard to rip out again (which in turn hampers further learning). So it's most efficient to make the entry very easy for people.

> You are complicating it unnecessarily. If you really want to compare
> absolute times, just compare the total time spent on the work --
> including both actual programming and all the versioning stuff.

Yep, that fits.

> another very important effect: it reduces frustration. And that's
> rather hard to measure in terms of time...

But that's where Git really screwed up in my case, while Mercurial excelled...

> > But that really defeats everything you write later about efficient
> > workflows.
>
> No, it doesn't. I was talking about how Git allows being very
> efficient, once you get a good understanding of the fundamentals. The
> fact that most people don't bother, doesn't change this at all :-)

But for most users it invalidates the advantages you write about.
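(For what it's worth, the branch merge I keep looking up boils down to only a handful of commands - a minimal sketch, assuming git is installed; the repository path, file, branch name, and identity below are all invented for illustration:)

```shell
# Minimal branch-merge sketch, assuming git is installed.
# Repository path, file, branch name, and identity are invented.
rm -rf /tmp/merge-demo
mkdir -p /tmp/merge-demo
cd /tmp/merge-demo
git init -q
git config user.email "arne@example.org"   # hypothetical identity
git config user.name "Arne"

echo base > news.txt
git add news.txt
git commit -qm "initial news file"

git checkout -qb news-contribution         # create the topic branch
echo "new item" >> news.txt
git commit -qam "add a news item"

git checkout -q -                          # back to the previous branch
git merge -q news-contribution             # fast-forward merge
```

When the branches have diverged, the same `git merge` creates a merge commit instead of fast-forwarding; the commands stay the same.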
> You can only make good use of Git's potential if you bother to get
> past that, and actually learn the universal underlying concepts. The
> fact that you can do pretty much anything, and without having to learn
> additional features, once you have learned these concepts, makes Git
> more efficient for programmers IMHO -- *if* they are willing to learn.

My question here is how often you need these advanced workflows - and how easy it is to teach them to others when you want to collaborate.

> (As I said before, I wouldn't necessarily recommend Git for
> non-programmers.)

(...while I want a tool which I can use together with artists and writers - and DAUs: "dümmste anzunehmende User" == "dumbest expectable users" - but which is still efficient for me to use, so we have quite different requirements - which explains many of our differing opinions, I think :) ).

Best wishes,
Arne
x11
Hello, I am getting this:

. . .
Setting up x11-common (1:7.4+4)
insserv: warning: script 'K01pcmcia' missing LSB tags and overrides
insserv: warning: script 'pcmcia' missing LSB tags and overrides
insserv: warning: script 'ifupdown-clean' missing LSB tags and overrides
insserv: warning: script 'ifupdown' missing LSB tags and overrides
insserv: There is a loop between service umountfs and ifupdown if stopped
insserv: loop involving service ifupdown at depth 4
insserv: loop involving service networking at depth 3
insserv: loop involving service umountfs at depth 6
insserv: loop involving service umountnfs at depth 6
insserv: exiting now without changing boot order!
dpkg: error processing x11-common (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 x11-common
E: Sub-process /usr/bin/dpkg returned an error code (1)

I also get some errors every time I try to apt-get something:

Errors were encountered while processing:
 x11-common netbase libice6 libsm6 libwww-perl libxext6 libxt6 libxmu6
 libxaw7 libxml-parser-perl libxml-sax-expat-perl libxml-simple-perl
E: Sub-process /usr/bin/dpkg returned an error code (1)

Thanks
Jakub