On 02/19/14 14:37, Gevisz wrote:
> On Tue, 18 Feb 2014 22:53:12 +0400
> the <the.gu...@mail.ru> wrote:
>> On 02/18/14 17:56, Gevisz wrote:
>>>> On Mon, 17 Feb 2014 23:30:42 -0600 Canek Peláez Valdés
>>>> <can...@gmail.com> wrote:
>>>>
>>>>> On Mon, Feb 17, 2014 at 8:05 PM, Gevisz <gev...@gmail.com>
>>>>> wrote: [ snip ]
>>>>>> How can you be sure that something is "large enough" if, as you
>>>>>> say below, you do not care about probabilities?
>>>>>
>>>>> By writing correct code?
>>>>
>>>> No, by arguing that fixing bugs in a 200K-line program is as easy
>>>> as fixing a bug in 20 programs of 10K lines each. That is just not
>>>> true; it is just the opposite.
>>>>
>>>>>>>> SysVinit's code size is about 10,000 lines of code, OpenRC
>>>>>>>> contains about 13,000 lines, and systemd about 200,000 lines.
>>>>>>>
>>>>>>> If you take into account the thousands of lines of shell code
>>>>>>> that SysV and OpenRC need to match the functionality of systemd,
>>>>>>> they use even more.
>>>>>>>
>>>>>>> Also, again, systemd has a lot of little binaries, many of them
>>>>>>> optional. The LOC of PID 1 is actually closer to SysV's
>>>>>>> (although still bigger).
>>>>>>>
>>>>>>>> Even assuming systemd's code is as mature as sysvinit's or
>>>>>>>> openrc's (though I doubt this), you can easily calculate the
>>>>>>>> probability of segfaults yourself.
>>>>>>>
>>>>>>> I don't care about probabilities;
>>>>>>
>>>>>> If you do not care about (= do not know anything about)
>>>>>> probabilities, and mathematics in general, you are just unable
>>>>>> to understand that debugging a program with 200K lines of code
>>>>>> takes
>>>>>>
>>>>>> 200000!/(10000!)^20
>>>>>>
>>>>>> times more time than debugging 20 different programs of 10K
>>>>>> lines each. You can try to calculate that number yourself, but
>>>>>> I am quite sure that if the latter can take, say, 20 days, the
>>>>>> former can take millions of years.
>>>>>>
>>>>>> It is all probability! Or, to be more precise, combinatorics.
>>>>>
>>>>> My PhD thesis (which I will defend in a few weeks) is in
>>>>> computer science, specifically computational geometry and
>>>>> combinatorics.
>>>>
>>>> It is even more shameful for you not to understand such simple
>>>> facts from elementary probability theory (which is mostly based
>>>> on combinatorics).

> TBH I don't understand your estimate. Where did permutations come
> from? Are you comparing all the different combinations of lines of
> code?
>
>> I just wanted to convey that, if an involved program is n times
>> longer than another one, it is not, in general, true that it will
>> take only n times more time to find a bug. The dependence here is
>> nonlinear, with much steeper growth than linear, because the number
>> of possible ways to go wrong grows in proportion to permutations,
>> not necessarily of lines, but at least of some other units whose
>> number is roughly proportional to the number of lines.

As I see it: suppose there are b different paths through each unit of
code (so for one unit with one input we get b possible outputs).
Suppose we have A different states after executing N units, with one
more unit left to execute. Running that final unit from each of the A
initial states gives b outcomes per state, so we end up with A*b
possible final states; by induction, b^(N+1) states after executing
N+1 units. If there are s mistakes we can make in each unit, we get
b = 2^s paths per unit, and finally 2^(s*N) paths overall. I may be
glitching, though.
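The state-counting above can be sketched numerically. A toy
illustration in Python, under the same assumption as the model: s
independent possible mistakes per unit, so 2^s paths per unit and
2^(s*N) paths for N units. The function name `states` and the unit
counts (200 vs. 20x10, mirroring the 200K vs. 20x10K line figures from
the thread) are mine, chosen for illustration only:

```python
# Toy model from the argument above: each unit of code admits s
# independent mistakes, giving 2**s paths per unit, so a program of
# N units has (2**s)**N == 2**(s*N) possible paths to consider.

def states(units, s):
    """Number of possible paths through a program of `units` units,
    with 2**s paths per unit."""
    return 2 ** (s * units)

s = 1  # one possible mistake per unit (assumed, for illustration)

# One monolithic program of 200 units:
monolith = states(200, s)

# Twenty independent programs of 10 units each. Their search spaces
# add rather than multiply, because each program is debugged alone:
split = 20 * states(10, s)

print(monolith)  # 2**200, a 61-digit number
print(split)     # 20 * 2**10 = 20480
```

Even with a single possible mistake per unit, the monolith's path
count dwarfs the combined count of the twenty small programs, which
is the nonlinear growth the argument appeals to.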