[Pharo-users] Some Metacello issue
Hi,

after having made some new releases I have an odd Metacello issue and I am not sure how to debug it. This happens on my legacy Pharo3/Pharo4 images that I still need to support in deployment. It seems to be related to a dependency chain of "app" -> VoyageMongo -> Magritte3 3.5.0 and somehow version '3.1.1.1' of Magritte3 doing something with Grease #stable and not finding a 'Core' group. Any idea how to debug it or if any configuration changed recently? It also only seems to happen if there are two dependency chains that try to load the same VoyageMongo version.

Broken:

pharo Pharo3.image config http://smalltalkhub.com/mc/osmocom/Osmocom/main ConfigurationOfOsmoUniverse --install=bleedingEdge

Error: Name not found: Core
MetacelloMCVersionSpec(Object)>>error:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>resolveToLoadableSpec:forLoad:forMap:packages: in Block: [ ^ self error: 'Name not found: ' , aString ]
MetacelloMCVersionSpec(MetacelloVersionSpec)>>packageNamed:forLoad:forMap:ifAbsent: in Block: [ ...
Dictionary>>at:ifAbsent:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>packageNamed:forLoad:forMap:ifAbsent:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>resolveToLoadableSpec:forLoad:forMap:packages:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>resolveToLoadableSpecs:forLoad:map: in Block: [ :req | ...
Array(SequenceableCollection)>>do:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>resolveToLoadableSpecs:forLoad:map:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>resolveToLoadableSpecs:
MetacelloMCVersionSpec(MetacelloVersionSpec)>>expandToLoadableSpecNames: in Block: [ :cache | ...
MetacelloPharo30Platform(MetacelloPlatform)>>stackCacheFor:cacheClass:at:doing: in Block: [ :dict | ...
MetacelloPharo30Platform(MetacelloPlatform)>>useStackCacheDuring:defaultDictionary: in Block: [ ^ aBlock value: dict ]
BlockClosure>>on:do:
...

Working:

Metacello new
	configuration: 'Magritte3';
	repository: 'http://www.smalltalkhub.com/mc/Magritte/Magritte3/main';
	version: '3.5.0';
	load
Re: [Pharo-users] Some Metacello issue
> On 7. Jun 2017, at 14:09, Stephan Eggermont wrote:
>
> Never refer to fixed versions unless you know why (you need to avoid a
> specific bug fix).

When wanting to have repeatable builds (e.g. for bugfixes) and in the absence of other means to lock/define versions externally, I think using a fixed version is the way to go.

> What is most likely is that there is some overconstrained configuration.
> Does your ConfigurationOfVoyageMongo or one of the configurations it
> pulls in refer to different versions of grease or magritte? Another
> issue can be that there are older configurations already loaded that
> conflict with the newest ones. Indeed, the ConfigurationOfMongoTalk
> is broken, referring to a fixed and older version of Grease.
> ConfigurationOfVoyageMongo should probably be using #'release3' of
> Magritte, but that doesn't break it.

Right. So we have an "OsmocomUniverse" build job that pulls all the apps into a single image. This helps to make API modifications and not forget any of the client code. The configuration has these dependencies:

ConfigurationOfOsmocomUniverse
  -> ConfigurationOfHLR
       -> ConfigurationOfVoyageMongo
            -> Mongotalk -> Grease A
            -> Magritte3 -> Grease B
  -> ConfigurationOfSMPPRouter
       -> ConfigurationOfVoyageMongo
            -> Mongotalk -> Grease A
            -> Magritte3 -> Grease B

What happens is that somehow "Grease A" gets loaded, then "Grease B", and when it is time for "Grease A" again... the system kind of explodes, and this happens for Pharo3 and Pharo6.

Now the question to me is: why is this coming up right now? Did MongoTalk change, or Magritte3, or something else? Is there an easy way for Metacello to try a mirror instead of the original, e.g. to inject an older ConfigurationOfMagritte3?

holger
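For the repeatable-builds point above, one option is the lock facility of the Metacello scripting API. A minimal sketch, assuming the scripting API (with #lock) is available in the image; it does not answer the mirror question, but it is one way to keep a known-good Magritte3 in place:

"Sketch: pin Magritte3 to 3.5.0 so later loads will not silently pick another version."
Metacello new
	configuration: 'Magritte3';
	repository: 'http://www.smalltalkhub.com/mc/Magritte/Magritte3/main';
	version: '3.5.0';
	lock.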
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 17. Apr 2017, at 21:30, Juraj Kubelka wrote:
>
> Hi Holger,

Hey Juraj!

> Basically it tests `FileLocator home exists`, if false, it does not touch
> disk. We could also add a #disablePersistence method if necessary.

I thought I tested it but somehow it is broken (again)? Looks like FileLocator>>#exists fails instead of answering false? Can you reproduce it?

$ unset HOME
$ pharo ...

Error: Can't find the requested origin
UnixResolver(PlatformResolver)>>cantFindOriginError
[ self cantFindOriginError ] in UnixResolver(PlatformResolver)>>directoryFromEnvVariableNamed: in Block: [ self cantFindOriginError ]
UnixResolver(PlatformResolver)>>directoryFromEnvVariableNamed:or:
UnixResolver(PlatformResolver)>>directoryFromEnvVariableNamed:
UnixResolver>>home
[ self home / '.config' ] in UnixResolver>>preferences in Block: [ self home / '.config' ]
UnixResolver(PlatformResolver)>>directoryFromEnvVariableNamed:or:
UnixResolver>>preferences
UnixResolver(FileSystemResolver)>>resolve:
SystemResolver(FileSystemResolver)>>unknownOrigin:
SystemResolver(FileSystemResolver)>>resolve:
InteractiveResolver>>unknownOrigin:
[ self unknownOrigin: origin ] in InteractiveResolver>>resolve: in Block: [ self unknownOrigin: origin ]
IdentityDictionary(Dictionary)>>at:ifAbsent:
InteractiveResolver>>resolve:
FileLocator>>resolve
FileLocator(AbstractFileReference)>>exists
GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>>shouldCallPreviousPersistence
GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>>ensure:
GlobalIdentifier>>ensure
GlobalIdentifier class>>initializeUniqueInstance
GlobalIdentifier class>>uniqueInstance
SystemSettingsPersistence class>>resumeSystemSettings
[ :persistence | persistence resumeSystemSettings ] in PharoCommandLineHandler>>runPreferences in Block: [ :persistence | persistence resumeSystemSettings
...etc...
BlockClosure>>cull:
SystemDictionary(Dictionary)>>at:ifPresent:
SmalltalkImage>>at:ifPresent:
PharoCommandLineHandler>>runPreferences
PharoCommandLineHandler>>activate
PharoCommandLineHandler class(CommandLineHandler class)>>activateWith:
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 8. Jun 2017, at 23:16, Holger Freyther wrote:
>
> Hey Juraj!

Hey!

This will most likely block more people trying to deploy a headless Pharo6 image, but I am not so sure how to properly fix it.

> $ unset HOME
> $ pharo ...
> Error: Can't find the requested origin
> ...
> UnixResolver(PlatformResolver)>>directoryFromEnvVariableNamed:

$HOME is not set so cantFindOriginError will be executed.

> UnixResolver>>home
> [ self home / '.config' ] in UnixResolver>>preferences in Block: [ self home / '.config' ]

XDG_CONFIG_DIR can not be found and then "self home" will be executed...

> FileLocator(AbstractFileReference)>>exists

(FileLocator preferences / '.config' / ...) resolve exists

a.) Behave like Unix and resolve $HOME to ''

$ unset HOME
$ echo $HOME/.config
/.config

self home / '.config' => '/.config'

b.) FileLocator>>#exists

If something cannot be resolved, one can argue that it doesn't exist? So I wonder if the exception should be caught and false returned?

c.) ???

I don't see an obvious/good approach. Do you?

holger
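For option b.), a call-site workaround is possible without touching FileLocator itself. A minimal sketch that treats any resolution failure as "does not exist"; catching Error this broadly is an assumption and may hide unrelated problems:

"Sketch: guard the existence check so a failed $HOME resolution answers false."
| homeExists |
homeExists := [ FileLocator home exists ]
	on: Error
	do: [ :e | false ].
homeExists
	ifTrue: [ "safe to touch the disk" ]
	ifFalse: [ "skip persistence" ]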
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 9. Jun 2017, at 11:09, Holger Freyther wrote:
>
> a.) Behave like unix and resolve $HOME to ''
>
> $ unset HOME
> $ echo $HOME/.config
> /.config
>
> self home / '.config' => '/.config'

Implementing UnixResolver>>#home as

home
	^ self directoryFromEnvVariableNamed: 'HOME' or: [ self resolveString: '' ]

will lead to something creating /.config/pharo (if possible). By itself this change is not good enough, but it might be when combined with --no-default-preferences.

Comments? Opinions?

holger
Re: [Pharo-users] [Pharo-dev] Pharo6 server deployment and no home directory
> On 9. Jun 2017, at 13:26, Sven Van Caekenberghe wrote:

Hey,

> Why would $HOME not be set ?

In this specific case runit doesn't export HOME when starting my service, but looking at systemd and picking a random service like exim4.service I see:

$ strings /proc/946/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C
_SYSTEMCTL_SKIP_REDIRECT=true
PWD=/

I think it is safe to assume this applies to other services started by systemd as well.

> And if it is not set / seems like a reasonable default.

In shell it would be "", but with >>#resolveString: '' it seems to be /, which seems fair enough, e.g. as {home} / '.config' is used anyway.

holger
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 12. Jun 2017, at 16:41, Juraj Kubelka wrote:
>
> Hi Holger,
>
> I have an impression that it can be solved by using
> "--no-default-preferences" option:
> ./pharo Pharo.image --no-default-preferences
>
> Is it a good solution for you? Or do you need to load some preferences?

Right, unsetting HOME and then using --no-default-preferences makes the error go away.

> Another option could be implementing an error signal, e.g.,
> CantFindOriginError in the cantFindOriginError method and catch this in the
> GlobalIdentifier object.

I wonder if "FileLocator home exists" should really throw an exception in case {home} can not be resolved.

a.) Just because it can not be resolved, it might still exist?
b.) If it can not be resolved it doesn't exist from an image point of view?

holger
Re: [Pharo-users] UUIDGenerator
> On 17. Jun 2017, at 20:51, horrido wrote:

Hey!

> Is there even one shred of documentation anywhere that shows how to use
> UUIDGenerator? A thorough Google search reveals nothing! All I find are
> reference materials. I'd like to see just one working code sample, no matter
> how simple.

In these cases I use "Analyze->Class refs" on the class and it brings up the test case for the UUIDGenerator. Have a look at the >>#setUp and then the tests?

have a nice weekend
holger
Re: [Pharo-users] Critical issues for Dr. Geo on P6
> On 21. Jul 2017, at 11:19, Hilaire wrote:
>
> Hello people,

Hi!

> Here are a few critical issues due to bugs or lack of information or feature
> for porting Dr. Geo to P6. There were others critical issues from P6 but were
> resolved and will be hopefully integrated, when ?
>
> • Minimal Dr. Geo image

I am concerned with the growth from Pharo3->Pharo6 as well and, triggered by your mail, looked at it again. In contrast to ImageCleaner>>#cleanUpForRelease I want to keep the Monticello/Metacello packages for now (and don't want to unload them after test).

I call this with "pharo Image eval clean.st" and for whatever reason, if I pass --save, bad things (can't talk to mongod) happen during my system test, but that is another story (maybe the save is executed at the _next_ image start again).

World closeAllWindowsDiscardingChanges.
(RPackage organizer packages select: [:package | package packageName includesSubstring: 'Test'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName beginsWith: 'Versionner'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName beginsWith: 'ProfStef'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName beginsWith: 'Ice'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName beginsWith: 'BaselineOf'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName beginsWith: 'ConfigurationOf'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName endsWith: '-Help'])
	do: [:each | each removeFromSystem ].
(RPackage organizer packages select: [:package | package packageName endsWith: 'Examples'])
	do: [:each | each removeFromSystem ].
ImageCleaner new cleanUpForRelease.
Smalltalk snapshot: true andQuit: true.
[Pharo-users] "Leak"/reference hunting with RefsHunter
Hi,

as I didn't remember the name of the tool, and maybe as a future reference to myself: I have installed RefsHunter from the catalogue and it helped me a lot.

I was looking at the memory consumption of my Pharo6.0 image and did something crazy like counting how many objects exist of a specific class (Object allSubInstances copy do.. and put that into a dictionary). I noticed that for the ASN1 model the nodes of the parse tree survived. Yesterday I tried to use the pointer explorer, but most references are held by the UI code and I gave up after a bit.

Today I found the RefsHunter again and something as simple as (found through the class comments)

| rh |
rh := RefsHunter snapshot.
rh wayFrom: ASN1AssignmentNode allInstances first to: myAsn1Model

brings me to a list of references, and searching the list from the end to the beginning gives a pretty good picture of where things go wrong.

Great to have a platform that allows walking the heap, and great that there are tools that make it manageable!

have a nice weekend
holger
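The "counting objects per class" step mentioned above could look roughly like this; a sketch only, assuming Bag>>#sortedCounts is available in the image (the exact snippet holger used is not shown):

"Sketch: count live instances per class and inspect the most common ones.
 Object allSubInstances is expensive on large images."
| counts |
counts := Bag new.
Object allSubInstances do: [ :each | counts add: each class name ].
(counts sortedCounts first: 20) inspect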
Re: [Pharo-users] "Leak"/reference hunting with RefsHunter
> On 22. Jul 2017, at 20:40, Pavel Krivanek wrote:
>
> In the most of standard cases you can use ReferenceFinder. It is in the
> catalog too and I think that we should integrate it into the Pharo because it
> is extremely useful for the memory leaks detection. It does not require
> snapshots and I firstly try it before applying of the RefsHunter (and I say
> that as the author of the RefsHunter ;-))

Cool, and thank you for RefsHunter! I found RefsHunter by typing "leak". Could we get this attached to the ReferenceFinder in the catalog as well?

holger
[Pharo-users] Object count/memory usage in Pharo6 WAS Re: Critical issues for Dr. Geo on P6
> On 22. Jul 2017, at 14:00, Hilaire wrote:
>
> Hi Holger,
>
> With Pharo3 I was proceeding like this, by uninstallation of packages.
> However there is the promise of building from a minimal image, but it is not
> documented AFAIK.
>
> By curiosity, what is the size of your resulting image.

I see 31mb image size for Pharo3 and 49mb for Pharo6. The changes file is about the same. From the top command on a test system, where "pharo" is a Pharo6 VM and image and "pharo-vm" is the Pharo3 system:

  PID USER PR NI    VIRT   RES  SHR S %CPU %MEM     TIME+ COMMAND
 4206 root 20  0 1056684 74036 1600 S  5.7  1.2   1582:15 pharo-vm
10459 root 20  0  104040 90348 2124 S  5.0  1.5 369:00.29 pharo

I looked a bit at memory usage to see if some random changes make a difference:

* Shrink ZnMimeType ExtensionMaps to a minimum when not handling extensions?
* ASTCache reset to get rid of the spotter cache. Not sure if the SessionManager runs ASTCache>>#shutDown properly. It looks like it should but it doesn't?
* ByteString compaction? I did

  strs := ByteString allInstances.
  strs size -> strs asSet size.

  The numbers on a not fully clean image are 99874 vs. 42139. I wonder in which hell I end up if I use >>#becomeForward: to compact it (I tried this once for GNU Smalltalk and I am sure others tried it before). I think anyone using == on String is using the wrong class? Any good arguments against doing it? (See the sketch below.)

I am moving my Pharo6 image into production right now and then will look into setting up CI for Pharo7 with a minimal image.

cheers
holger
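A sketch of the ByteString compaction idea from the list above, illustrative only and not what holger actually ran; it assumes one accepts the risks he mentions (any code relying on string identity will observe changes):

"Sketch: deduplicate equal ByteStrings via #becomeForward:.
 Risky on a live image; identity-dependent code (==) may break, and forwarding many objects is slow."
| canonical |
canonical := Dictionary new.
ByteString allInstances do: [ :each |
	| first |
	first := canonical at: each ifAbsentPut: [ each ].
	first == each ifFalse: [ each becomeForward: first ] ].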
Re: [Pharo-users] Critical issues for Dr. Geo on P6
> On 22. Jul 2017, at 12:07, Stephane Ducasse wrote:
>
> Hi holger

Hey Stef!

> "the growth from Pharo3->Pharo6" us too :)
> this is why we invested in the bootstrap and this is why we will
> remove (and we started) packages and classes.
> And this is also why we will continue to repackage the system.

Yes, this is very fascinating for Pharo7. It would be nice if we could track size (image+changes, but also ram usage and object count) over time to see our progress. I recently added ordinary metric upload to my "bob-bench.org" test and metric tracking. ;)

holger
Re: [Pharo-users] Parser failure on FFI pragmas declaration in Pharo 5
> On 17. Aug 2017, at 19:37, Denis Kudriashov wrote:

Hey!

> Yes.
>
> Also simple solution can be to override compiler of problem classes to return
> old compiler.
>
> I know it is better to rewrite code but it can be not simple task when there
> are a lot of ffi-methods.

I ran into this problem with the wonderful (as it had a SHA256 implementation) NaCl bindings.

Pharo5:
* RBParser still has the currentScope variable and can import it
* Syntax highlighting ends in an exception (which I disabled)

Pharo6:
* RBParser doesn't have currentScope anymore so I patched it out
* Syntax highlighting seems to work fine

@Esteban: Would you accept a change to the FFI-Pharo5Compat to not use the currentScope variable/reduce error checking? Or would you accept it in a FFI-Pharo6Compat package? I think it would help to be able to load the NaCl code in Pharo6 and then fix it?

what do you think?
holger
[Pharo-users] ZnConstants class>>#httpStatusCodes and cloudflare
Hi,

I am currently using ZnClient to fetch data from a service behind "cloudflare" and sometimes the real/origin backend is unreachable/fails. Cloudflare has added additional[1] 5XX codes and ZnStatusLine>>#code: will signal a ZnUnknownHttpStatusCode because of that.

I wonder how ZnClient should deal with these errors? IIRC the HTTP RFC specifies error classes, and without knowing what "525" means one can still know that the request was not successful and that the server is at "fault"?

Should ZnStatusLine handle an unknown code gracefully? E.g. >>#reason: seems to do that already?

have a nice weekend
holger

[1] https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#Cloudflare
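Until Zinc itself is more lenient, one caller-side possibility is to catch the signal; a sketch only, assuming ZnUnknownHttpStatusCode propagates out of the request like other Zinc errors (the URL is a placeholder):

"Sketch: treat an unrecognised status code as a generic server-side failure."
| contents |
contents := [ ZnClient new get: 'https://service.example/api' ]
	on: ZnUnknownHttpStatusCode
	do: [ :err | nil "Cloudflare's extra 5xx codes: server at fault, no payload" ].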
Re: [Pharo-users] ZnConstants class>>#httpStatusCodes and cloudflare
> On 18. Aug 2017, at 20:14, Sven Van Caekenberghe wrote:
>
> Hi Holger,
>
> It is probably not a good idea to be too strict here. I committed the
> following to #bleedingEdge

thank you!

holger
Re: [Pharo-users] Parser failure on FFI pragmas declaration in Pharo 5
> On 18. Aug 2017, at 16:46, Denis Kudriashov wrote:
>
>> @Esteban: Would you accept a change to the FFI-Pharo5Compat to not use the
>> currentScope variable/reduce error checking? Or would you accept it in a
>> FFI-Pharo6Compat package?
>> I think it would help to be able to load the Nacl code in Pharo6 and then fix
>> it?
>
> But you can just switch default compiler. Is not works for you?

Yes. But I think it gets more difficult in Pharo7? So being able to load old code to rewrite it would be nice. :)

holger
[Pharo-users] "Leaking" CommandLineHandler when running headless image
Hi,

I am currently trying to run one of my images as non-root and, related to that, look into keeping changes in a different directory or not writing them at all. While looking at a write failure I saw some paths being logged that I used during CI to load code and should have been GCed.

Observation:

Pharo --headless My.image eval "Smalltalk garbageCollect. CommandLineHandler allSubInstances size"
84

Pharo --headfull My.image eval "Smalltalk garbageCollect. CommandLineHandler allSubInstances size"
2

As part of the CommandLineHandler allSubInstances there are eight LoadUpdatesCommandLineHandler still active. I would assume that a full GC should have collected them by now (some of them being created in May). In a headfull image they disappear quickly.

I varied the execution a bit:

Pharo --headless RoamingHub.image eval --no-quit "[ Smalltalk garbageCollect. FileStream stdout print: CommandLineHandler allSubInstances size; lf. Smalltalk snapshot: false andQuit: true] fork"
0

Hypothesis:

* Command line handlers call >>#snapshot:andQuit:.
* Image resumes in this process
* New session created
* Command line handlers execute
* Calls snapshot:andQuit:
* Image resumes..
* New session created
* Command line handlers execute
* ...

Can this be true? I think the proposal to start the image differently would help here? Any comments/ideas?

holger
Re: [Pharo-users] Smalltalk gets a reference in fastcompany Kay interview
> On 20. Sep 2017, at 06:43, Offray wrote:

Hey!

> Its a shame nog being able to read the article, because it is not posted in
> the open web and I don't have Facebook or Google to pay with my privacy for
> the "privilege" of reading Fast Company.

Sign-up by email is possible as well, probably with a one time email address? The link is below their google sign-in button.

cheers
holger
Re: [Pharo-users] Deploying on Linux with LibC version < 2.15
> On 4. Oct 2017, at 17:39, Cyril Ferlicot wrote:
>
> Hi,
>
> I am migrating some applications from Pharo 4 to Pharo 6. The new
> deployment of those applications needs to work on linux with LibC <
> 2.15. With Pharo 4 there was a special VM[1]. I do not see such VM for
> Pharo 6.

Which OS has such old versions of LibC? Which LSB standard does it support?
Re: [Pharo-users] Deploying on Linux with LibC version < 2.15
> On 5. Oct 2017, at 18:08, Bruce O'Neel wrote:
>
> Hi,

Hi!

> Well, our redhat 6.9 systems have 2.12, so, that qualifies.
>
> And yes, we still have RedHat 6, and 6.9 was released only 6 months ago! It
> will finish extended support in a mind-blowing 7 more years in 2024.
>
> Redhat 5, still supported for another 3 years till 2020 has glibc 2.5.

For a brief moment you really scared me. I thought you referred to RedHat Linux 6 which was released in 1999, but you are referring to Red Hat Enterprise Linux (RHEL).

As it turns out we have "latest" (as soon as a commit is made to pharo-vm.git) and hand curated "stable" (hand created source tarballs, rebuilt from a git commit of opensmalltalk-vm) for RHEL6 and CentOS 6.

CentOS 6.x:

# Add the repo
$ yum-config-manager --add-repo http://download.opensuse.org/repositories/devel:/languages:/pharo:/latest/CentOS_6/devel:languages:pharo:latest.repo

OR (for stable):
http://download.opensuse.org/repositories/devel:/languages:/pharo:/latest/CentOS_6/devel:languages:pharo:stable.repo

# Install 32bit packages (with X11 dependency for *-ui or not)
$ yum install pharo6-32-ui.i686 or pharo6-32.i386

# Install 64bit packages
$ yum install pharo6-64-ui.x86_64 pharo6-64.x86_64
Re: [Pharo-users] Deploying on Linux with LibC version < 2.15
> On 5. Oct 2017, at 22:22, Cyril Ferlicot wrote:
>
> Your instructions describes the steps for CentOS 6.x. Are they the
> exact same steps for RHEL6?

I don't have a RHEL subscription, but I assumed they are similar, and OBS even produces RHEL packages:
https://download.opensuse.org/repositories/devel:/languages:/pharo:/stable/RHEL_6/

> Also, I am trying this on a CentOS 6.0 virtual machine and I get this error :
>
> [centoslive@livecd test]$ yum-config-manager --add-repo
> http://download.opensuse.org/repositories/devel:/languages:/pharo:/latest/CentOS_6/devel:languages:pharo:latest.repo
> Loaded plugins: fastestmirror, refresh-packagekit
> Usage: "yum-config-manager [options] [section]
>
> Command line error: no such option: --add-repo
> [centoslive@livecd test]$
>
> Is there something else to install before?

Not sure. Even the RHEL documentation mentions --add-repo exists. I assume you can download the .repo[1] file and put it in the right directory?

holger

[1]
RHEL6:   https://download.opensuse.org/repositories/devel:/languages:/pharo:/stable/RHEL_6/devel:languages:pharo:stable.repo
CentOS6: https://download.opensuse.org/repositories/devel:/languages:/pharo:/stable/CentOS_6/devel:languages:pharo:stable.repo
[Pharo-users] "sourcetrail" a code browser for C++/Java
Hi,

I was just watching some videos and stumbled across sourcetrail[1][2]. The visualizations look pretty neat. Maybe it can be used as an inspiration?

cheers
holger

[1] https://www.youtube.com/watch?v=r8S6V6U5Vr4
[2] https://www.sourcetrail.com
Re: [Pharo-users] "sourcetrail" a code browser for C++/Java
> On 1. Nov 2017, at 06:03, rainer.wink...@poaceae.de wrote:
>
> Hi Holger,
>
> So I am still using the Open Source Software Exploration Tool Moose2Model
> (www.moose2model.org) I developed. I use it for Smalltalk and SAP ABAP. It
> can be used in a similar way to Sourcetrail, but gives also the option to
> build commented and customized maps that can be stored and updated with new
> coding versions. Sourcetrail on the other hand gives much more details on the
> analyzed coding.

ahh cool! Will watch your ESUG talk. What would be missing for Moose or Pharo?

cheers
holger
Re: [Pharo-users] How to deploy headless app without changes and source files?
> On 7. Jun 2017, at 17:11, Sven Van Caekenberghe wrote:

Hi,

> Note: it might be possible that some code fails due to missing method
> sources, YMMV.

Does the exception handling code need the sources files? Anecdotally I had some issues (exception handling causing exception handling, taking 99% CPU and filling up the debug log) which went away after installing the source files.

cheers
holger
Re: [Pharo-users] How to write a little REPL
> On 27. Nov 2017, at 05:38, Stephane Ducasse wrote:
>
> Hi

Hey!

> I'm working on a mini scheme implementation and I would like to add a REPL and
> I wonder how I can super easily get a read line.

The easiest might just be to use "rlwrap your-interpreter"? But I think you want to allow multi-line input. So either link libreadline (GPL) or libedit?

holger
Re: [Pharo-users] Iceberg regex
> On 26. Dec 2017, at 23:02, Ian Ian wrote:

Hey!

> Yes. That is how it is on github but there are private hit serveers. My
> own, for example.
>
> I found it usable after I changed the regex in both classes where it exists.

I tried to replace some of the regexps with ZnUrl, but some of the ssh/scp/git urls are not valid URIs, and then tweaked it to include the port number. Maybe someone else has another pass at it?

In PR#473[1] I mentioned git:g...@domin.com:path as a non valid URL that is happily accepted by git itself. :}

holger

[1] https://github.com/pharo-vcs/iceberg/pull/473
[Pharo-users] StampClient produce only and dealing with heartbeat
Dear Sven,

I started to use the StampClient and intend to use it to produce data, but for heartbeat and other parts I need to read from the socket as well. I wonder about the best strategy to deal with it.

The naive approach:

[
	| event sendFrame |
	event := sharedQueue next.
	sendFrame := self newSendFrameTo: queueName.
	sendFrame text: event convertToText.
	stampClient write: sendFrame.
] fork.

But now the StampClient enforces a non-zero heartbeat... so I could write something like this:

[
	| event sendFrame |
	event := sharedQueue nextWaitFor: stampClient timeout * 3.
	event isNil
		ifTrue: [ stampClient writeHeartBeat ]
		ifFalse: [ self convertAndSendEvent: event ].
] fork.

But now I face the issue (but maybe I had it before as well) that the server will itself send an empty frame as its heartbeat function and I need to read it. So I could write...

[
	event := sharedQueue...
	"try to read all pending events? How often to repeat it to read everything??"
	stampClient readNextFrame.
	...
] fork

Or to make it more involved? And create a reader and writer?

procConsume := [
	[ stampClient runWith: [ :msg | "do nothing" ] ]
		ifCurtailed: [ connectionClosed...handling ].
] fork.

procProduce := [
	[
		| event sendFrame |
		event := sharedQueue next.
		sendFrame := self newSendFrameTo: queueName.
		sendFrame text: event convertToText.
		stampClient write: sendFrame.
	] ensure: [ procConsume... do what exactly? ]
] fork.

So the last option seems to be the best. But how to deal with re-connects? How to not have "procConsume" write the heartbeat data in the middle of the produced event? After all, how did you solve that? Is the problem space clear enough?

holger
Re: [Pharo-users] StampClient produce only and dealing with heartbeat
> On 19 May 2016, at 10:40, Sven Van Caekenberghe wrote:
>
> Hi Holger,

Dear Sven,

> However, you need a regular opportunity to send something out. Thinking out
> loud, what about something like
>
> StampClient>>#runWith: receiveBlock do: sendBlock
>
> where receiveBlock is like it is now, and sendBlock is called regularly,
> basically when the loop goes through another cycle, to give you the
> opportunity to send something, being sure to have exclusive access.
>
> In the sendBlock you could query your sharedQueue that is being filled by
> another process, properly MP safe.
>
> The invocation of #runWith:do: should of course be forked.
>
> Does that make sense ?

It makes sense for my unacknowledged SEND, but I see several issues for a general scheme:

a.) If the write/receive ratio is not equal and I block in the send, then I will not receive quickly enough. And if we block on receive (with the *TimedOut) we will not write enough. This is one general architecture issue I seem to circle around[1]. I should not have to block on one or the other.

b.) Integration with ACKed sends (putting a receipt, reading a receipt-id). Is there a generic way to handle it? E.g. I would keep an event in the SharedQueue until it has been acked (and detect timeouts or such).

Last but not least: how do you handle the ConnectionClosed and do the re-connect? It seems that >>#runWith: will exit iff the ConnectionClosed signal has been raised. Will you respawn the process? Will you create another StampClient and re-execute?

Sorry, these are more questions than answers. I have a local client that is similar to >>#runWith:do: (but calls receive from within the send routine).

kind regards
holger

[1] With POSIX/C there is select(), in Windows WaitForMultipleObjects, and maybe with Erlang the selective receive. Now it is not very object oriented, but in pseudo Erlang syntax:

receive
	FrameReady -> self handleFrame: arg;
	FrameToSend -> self writeEvent: arg;
	Disconnect -> self reconnect.
	...
after self timeout ->
	self checkRecvHeartbeatOrSendIt
end

I could emulate it by spawning multiple processes on "receive", creating a queue, having a semaphore... but I don't know if I want to limit it to sockets...
Re: [Pharo-users] New Success Story: Sysmocom: Free Software for Mobile Communication
> On 23 May 2016, at 22:28, Esteban A. Maringolo wrote:
>
> It is really cool and impressive.
>
> Is any of the supporting libraries open-source? (like ASN.1)

http://smalltalkhub.com/#!/~NorbertHartl/ASN1

The model/ASN1 parser is probably one of the most complete FOSS parsers; the library is tied to DER/BER right now and it would be nice if someone adds *PER/JSON/XML encoding/decoding.

holger
Re: [Pharo-users] Problem with Mongo on Pharo5 ("collection already exists")
> On 25 May 2016, at 15:42, Esteban Lorenzano wrote:
>
> Hi,
> No I don't… in part because is not me who make that code, but also because it
> is expected: after, you have:
>
> getCollection: aString
>     ^ [ self addCollection: aString capped: false size: nil max: nil ]
>         on: MongoCollectionAlreadyExists
>         do: [ :err | MongoCollection database: self name: aString ]
>
> so the idea is to refine the error to separate MongoCollectionAlreadyExists
> so it can later be catch and handled properly.

When adding capped collection support it seemed like a good idea to fail if a collection already exists that might not be capped, but given the follow-up issues I wonder if I/we should restore the original behavior that just ignored all errors?

holger
[Pharo-users] Saving to local git and "Loading all file names from http://...pharo5/inbox"
Hi, every time I save a local package using gitfiletree:// it tries to download from the pharo5 inbox. Is this to be expected? I do not have the inbox associated with that package though? Can the version number resolving be changed? kind regards holger
Re: [Pharo-users] Saving to local git and "Loading all file names from http://...pharo5/inbox"
> On 29 May 2016, at 09:58, Sven Van Caekenberghe wrote:
>
>> For some reason the package manager is refreshing all packages. I don't know
>> why it happens, and it's quite annoying (because it slows down commits), but
>> it doesn't cause any actual problems, so don't worry about it too much.
>
> As I understand it, what happens is the following: before you commit to your
> MC repo, you have to find the next version number; a check is then done in
> all relevant repos; the cached content is not used, but an actual refresh is
> done. All this is so that my .5 would not conflict with someone else .5 - the
> chance that this happens is very small, and the check does not really prevent
> it.

I assumed that, but can it be limited to the repositories that are associated with the package? I am afraid that next time I travel I cannot commit to my local repository (and of course the speed part). :)

holger
[Pharo-users] Mongo-BSON OID LargePositiveInteger increase
Hi,

I tried to reach the author for several weeks but he doesn't seem to respond, so I am trying to reach a wider audience to either confirm my suspicion or to be corrected.

In http://smalltalkhub.com/#!/~MongoTalkTeam/mongotalk/diff/Mongo-BSON-HenrikSperreJohansen.43 the following change is done:

+	"digitAdd: wraps around to 0 when result would overflow"
+	^ counter := counter
+		ifNil: [self newCounter]
+		ifNotNil: [counter digitAdd: 1]!
-	^ counter := (counter + 1) \\ 16rFF!

The old code has overflow checking; the new code makes a statement I don't think is true. counter is "LargePositiveInteger new: 3" to use three bytes. So given the above code and the experiment (yes, I could just add a bigger number)

| id |
id := LargePositiveInteger new: 3.
1 to: (16777215 + 50) do: [:each | id := id digitAdd: 1].
id.

Given the comment it should overflow and the value be 50? This is not what the result is. So shall the truncation be added again, or at least the comment be updated?

The code will go from LargePositiveInteger to SmallInteger when overflowing from three to four bytes, but luckily >>#value

...
replaceFrom: 1 to: 3 with: self class counterNext startingAt: 1
...

will even work when counterNext returns a SmallInteger. But given the old code and the comment in the new code, this does not seem to function as intended?

kind regards
holger

PS: The other part is that >>#newCounter doesn't seem to be ever executed. On first load >>#initialize will call >>#reset and >>#shutDown: calls reset. So the code to "randomize" the initial counter doesn't seem to work.
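If the truncation were restored, the increment could look roughly like this; a sketch only, with the wrap modulus chosen as 2^24 for a three-byte counter (the old code used 16rFF), and the selector taken from the replaceFrom:to:with: snippet above rather than from the actual Mongo-BSON source:

"Sketch: wrap the OID counter explicitly at three bytes; not the actual Mongo-BSON code."
counterNext
	^ counter := counter
		ifNil: [ self newCounter ]
		ifNotNil: [ (counter + 1) \\ 16r1000000 ]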
Re: [Pharo-users] Mongo-BSON OID LargePositiveInteger increase
> On 30 May 2016, at 19:03, Henrik Johansen wrote:

Hi!

> It's starting to come back to me; IIRC, + will normalize results to
> SmallIntegers, digitAdd: will not.

not with Pharo5:

((LargePositiveInteger new: 3) digitAdd: SmallInteger maxVal - 10) class
=> SmallInteger

> I thought it would be nice to use a single
> replaceFrom:to:with:startingAt:
> call to insert the entire counter; however, I didn't bench that particular
> part.

yes, it is nice

> So while the rewrite overall gained a small amount of speed, it turns out
> digitAdd: is quite slow (even though it's a primitive), so reverting to using
> Smallinteger arithmetic for the counter, and inserting the counter a digit at
> a time is most likely worth it:
>
> "Pharo4, LargeInteger counter"
> [OID new] bench '1,194,004 per second'
>
> "Pharo4, reverted to SmallInteger counter"
> [OID new] bench '1,681,203 per second'

cool! Change looks good, but please remove the obsolete >>#digitAdd: comment.

thanks a lot!!
holger
Re: [Pharo-users] [ANN] JSONWebToken
> On 22 Jul 2016, at 16:17, Norbert Hartl wrote:

Hi!

> Taking the assumption of having 20 service images, every image would need to
> get back to A in order to check authorization information. The more services
> images you have the more load it will put on A. In a JWT use case scenario
> the same would look like
>
> 1. client C authenticates and receives a JWT containing authorization
>    information. The token is signed by A
> 2. client C hands out JWT to service S
> 3. S checks the signature of A and knows that the authorization information
>    contained is valid.
> 4. S grants C access

thank you for the information! I have one rather specific question. How is the token normally transported from C to S? Part of the body/data of a POST/PUT/GET? A custom header inside the HTTP request?

kind regards
holger
[Pharo-users] Script to migrate all mcz packages to git?
Hi, I think I have seen something but can't find it right now. I would like to move to git but preserve the history (and my commit messages and the commit date/time). Is there a script that goes through all versions of a package and copies them to git repository? Will it be able to preserve the original commit date? thank you holger
[Pharo-users] State of ZeroMQ bindings and polling
Hi, I wonder if someone is using ZeroMQ with Pharo. Is it in production? How does it work? I specially wonder about the polling integration. Is there some integration? How does it work? h.
[Pharo-users] SUnit and "data driven" tests
Hi, I have a test/algorithm that I would like to test with different sets of input (and matching expected) output. Let's imagine I write a protocol library for the RTP streaming protocol and would like to verify my implementation of sequence number wrapping and jitter delay, e.g. the difficulty is to deal with the sequence number counter wrapping. My test-input is an array of sequence numbers and their arrival time and the expected result would be a calculation of "packet loss", jitter. I would like SUnit to call my testcase multiple times with different data being available. Is something like this supported with SUnit? thank you holger
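One way to do this with plain SUnit (no extra framework) is to loop over literal cases inside a single test method; a minimal sketch, where RtpStatistics, #addSequence:arrivedAt: and #packetLoss are made-up names for illustration:

"Sketch: plain SUnit iterating over #(sequenceNumbers expectedLoss) pairs.
 RtpStatistics and its selectors are hypothetical."
testPacketLossAcrossSequenceWrap
	#(
		#( #(65533 65534 65535 0 1) 0 )  "wrap without loss"
		#( #(65534 0 1)             1 )  "one packet lost across the wrap"
	) do: [ :case |
		| stats |
		stats := RtpStatistics new.
		case first doWithIndex: [ :seq :i | stats addSequence: seq arrivedAt: i * 20 ].
		self assert: stats packetLoss equals: case second ]

The drawback of this style is that the whole method fails on the first bad case; splitting the data into separate test methods (or generating them) keeps each case's result visible on its own.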
Re: [Pharo-users] [Garage] How to espace sql special chars in a query?
> On 12 Aug 2016, at 10:34, Blondeau Vincent wrote:
>
> Hello,

Hi

> I am looking for a method that escape special characters for SQL queries.
> E.g.: I want to escape : ' in a where expression: '....Where field1 = ''',
> myvariable , ''' .....' with myvariable := 'don''t do'.
> I am using garage and haven't found it in the package. Does someone know
> where I can found it?

I have used prepared statements for that. They allow me to bind the variable to the statement and I don't have to worry about escaping (at least that is the theory).

holger
Re: [Pharo-users] [Garage] How to espace sql special chars in a query?
> On 12 Aug 2016, at 12:10, Blondeau Vincent wrote:
>
> BTW, even with prepared statements, it doesn't work either:
>
> SQL query : EXECUTE preparedStmtd2qbaa1ap7ceiaq643sxlkyyw('Quand
> l'utilisateur est connecté sur "son serveur"', '1277')
> -> 'ERREUR: erreur de syntaxe sur ou près de « utilisateur » au caractère
> 56' (Syntax error near char 56)

Which database? What does the statement look like? What are the bound variables? E.g. is the string supposed to be UTF8 or a blob?
Re: [Pharo-users] [Garage] How to espace sql special chars in a query?
> On 12 Aug 2016, at 13:36, Blondeau Vincent wrote:
>
> I think that ' close the EXECUTE query and is not escaped by garage.

you are right. The statement is very sub-optimal (but should be easy to fix).

"If not it means by the moment that we are a named prepared statement and we execute that"
argumentsString := arguments
	ifEmpty: [ '' ]
	ifNotEmpty: [ '(''', (''', ''' join: (arguments collect: #asString)), ''')' ].
^ 'EXECUTE ', (self propertyAt: #statementId), argumentsString

In GNU Smalltalk[1] I had used FFI to use libpg/PQexecParams[2] that allows to pass query and parameters separately. Garage implements the wire protocol, but it should be possible to pass the parameters separately as well. It should be simple to use/add this protocol.

holger

[1] https://github.com/zecke/gnu-smalltalk-debian/blob/master/packages/dbd-postgresql/Connection.st#L185
[2] https://www.postgresql.org/docs/9.1/static/libpq-exec.html
[Pharo-users] pharo50 vm update?
Good Morning, I am looking into migrating to Pharo5.0 and wonder what it would take get the performance fix from VMMaker.oscog-eem.1914.mcz into the stable Pharo50 VM? kind regards holger
[Pharo-users] Iceberg and git workflow
Hi,

I am a heavy git user with languages like C, C++, Python, Ruby and even GNU Smalltalk, and I hope Iceberg will bring the same powerful experience to Pharo. Last Friday I started to add a bigger refactoring for a new feature to my software and didn't finish. Sadly today an issue in the code was found and I would like to fix this before fixing my code. I use this as an opportunity to ask if Iceberg has some answers for that.

With a non-Pharo project I would do:

a.) If current HEAD is same as origin/master

$ git stash        (stash away my not finished changes)
$ vi code.c
fix..
$ git commit -a -c "subject

long explanation of fix

reference to bug"
$ git stash apply  (and go back to working on my feature)

b.) E.g. if I finished n commits but I am not fully done

# store my work
$ git commit -a -m "Work In Progress hack.."
$ git checkout -b new-feature-branch

# go back to master
$ git checkout master
$ git reset --hard origin/master   (to restore)

# work on the fix
$ vi code.c
fix..
$ git commit -a -c "fix..."

# go back and continue on my fix
$ git checkout new-feature-branch
$ git rebase origin/master
$ git reset HEAD^1
.. continue to work
[Pharo-users] Using a Unix filedescriptor in a FileStream?
Hi,

I explored using the Linux inotify API with UFFI. The inotify_init/inotify_init1 routine will give me a Unix file descriptor and I would like to

a.) monitor it for being readable
b.) read from it with a stream

For a.) I have found the AioEventHandler and think I will be able to call >>#descriptor: directly and then can use it (still figuring out the API, probably just wait for the >>#changed: call).

For b.) I thought I could use AttachableFileStream, but that requires a "fileId" and not a file descriptor.

Is there a way I can read from my fd using the standard stream API (otherwise I can try to use UFFI for read)?

regards
holger
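For context, the UFFI side of inotify_init mentioned above could be declared roughly like this; a sketch only, where the module name 'libc.so.6' and the surrounding wrapper class are assumptions, not the code holger used:

"Sketch of UFFI bindings for inotify(7); the receiver is assumed to be an FFI wrapper class."
inotifyInit
	^ self ffiCall: #( int inotify_init () ) module: 'libc.so.6'

inotifyAddWatch: fd path: aPathString mask: anInteger
	^ self ffiCall: #( int inotify_add_watch ( int fd, String aPathString, uint anInteger ) ) module: 'libc.so.6'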
Re: [Pharo-users] Using a Unix filedescriptor in a FileStream?
> On 20 Sep 2016, at 20:38, Mariano Martinez Peck wrote:
>
> Hi Holger,

Good Morning,

thank you for your reply.

> | reader |
> reader := OSSAttachableFileStream name: 'myStream' attachTo: aFileID writable: false.
> reader setNonBlocking "optional"

The only issue is that I have an "int fd" and not a SQFile. In >>#name:attachToCFile:writable: you create a SQFile* out of a FILE* (OSSCFile), but judging by the comment it doesn't work. The 32/64 bit issue can be solved by using FFIExternalStructure to model SQFile, but the question whether the VM was built with large file support on GNU/Linux is a tricky one.

So maybe we create another primitive to convert a FILE* to a SQFile* (and have it manage the lifetime of that memory?)? And maybe another primitive to do the same for a Socket?

> I still didn't understand why do you mean with a). What do you mean by
> "monitor it from being readable" ?

Imagine you want to exit the image in case the file /exit changes. You will charge inotify to watch this filepath and if the fd becomes readable you already know the answer, you don't have to read the event. But true, if I have a Socket or FileStream I can do a blocking read on it as well.

> I think you could dig a bit in OSSPipe, OSSAttachableFileStream and their
> usage. All classes have class comments, all methods are also documented, and
> there is quite some documentation in [1].

Will look again, but I didn't see anything obvious. E.g. primCreatePipe seems to already return two SQFile* ("fileId")?

cheers
holger
Re: [Pharo-users] Using a Unix filedescriptor in a FileStream?
> On 21 Sep 2016, at 15:09, Mariano Martinez Peck wrote:
>
> Exactly. I have been wanting this a couple of times while doing OSSubprocess.

https://github.com/pharo-project/pharo-vm/pull/108

Would be nice if you could review it and give it a try. It adds two primitives (one to work on fd, one to work on FILE). I probably also want to do:

sqFile->isStdioStream = isatty(fileno(file));

> Yes, exactly. I remember now. And as I said, I also wanted to be able to work
> at fd or FILE* level and I failed.

Could you give the above a try and then I try to get it into the OpenSmalltalk VM.

holger
Re: [Pharo-users] Using a Unix filedescriptor in a FileStream?
> On 22 Sep 2016, at 23:33, Mariano Martinez Peck wrote:
>
> Hi Holger,

Hey!

> I just run all OSSubprocess tests and they all worked! (tested in Pharo 5.0).
>
> I guess I will commit this on the dev branch and hopefully when this is
> integrated into the VM I can merge that for my next OSSubprocess release.

cool, and thank you for trying it so quickly. I thought it made sense to mark non-files the same as isStdioStream, but that triggers a funny case:

do {
	clearerr(file);
	if (fread(dst, 1, 1, file) == 1) {
		bytesRead += 1;
		if (dst[bytesRead-1] == '\n' || dst[bytesRead-1] == '\r')
			break;
	}
} while (bytesRead <= 0 && ferror(file) && errno == EINTR);

which means 0 or 1 char is read with >>#primRead:into:startingAt:count:, and in the case of inotify the event is lost (partially read and the rest discarded). I will have to start a discussion why primRead should be line buffered at all.

Anyway. My good news is that:

"an INotify"
| arr fileID |
self init: 8r4000.
self prim_add_watch: '/tmp' flags: 16r0100.

arr := ByteArray new: 4096.
fileID := StandardFileStream new primFdOpen: fd writable: false.
StandardFileStream new primRead: fileID into: arr startingAt: 1 count: 4096.
arr

has read a file notification event.. :)
Re: [Pharo-users] Using a Unix filedescriptor in a FileStream?
> On 23 Sep 2016, at 02:44, Ronie Salgado wrote:
>
> Hi Holger,

Hi!

> Currently the events given by this API are only holding a copy of the raw
> inotify event in rawEvent and a copy of the raw path in rawPath. Further work
> is required for translating the events into a more platform independent
> interface, and for supporting OS X and Windows with this API. However, this
> is already something that could be useful to you.

If I see it correctly you start a poll loop (wait 100ms then call epoll wait)? Would you be interested to try to use AioEventHandler with the primitive I proposed?

regards
holger
Re: [Pharo-users] Script to migrate all mcz packages to git?
> On 02 Aug 2016, at 13:23, Peter Uhnak wrote:

Hi Peter,

> At the time of the (post) writing it did preserve commit dates, but there was
> no metadata-less yet.
>
> In any case, I don't see a reason why it shouldn't preserve commit dates with
> metadata-less — it goes mcz by mcz and recommits it with given date, no?

I have modified >>#basicStoreVersion: to pass the --date option to git and now date/time is preserved.

I am copying multiple related packages into the same git repository and would like to "interleave" (based on date) the different packages. In your script you first have:

filesSorted := fileBlocks asSortedCollection: sortBlock.
files := (filesSorted collect: [ :x | x first ]) asArray.

to select which files to actually copy (as Gofer allResolved has a lot more), but after the:

goSource fetch. "downloads all mcz on your computer"

I would like to mix the different packages based on their date. Would you know which Gofer(?) API I can use to get a version info (?) so I can access the date?

holger
[Pharo-users] Speeding-up >>#instVarNamed: in Pharo-5.0 and beyond?
Hi,

Magritte and my TagLengthValue (TLV) library both use >>#instVarNamed: / >>#instVarNamed:put: to read and write from an object. I was just running >>#bench on my SMPP library and noticed that, besides Spur, Pharo5 is slower than Pharo3. I added this to PointerLayout:

instVarIndexFor: aString ifAbsent: aBlockClosure
	| idx |
	idx := 1.
	slotScope do: [:each |
		each isVisible ifTrue: [
			each name = aString ifTrue: [^idx].
			idx := idx + 1]].
	^aBlockClosure value

and modified ClassDescription/TClassDescription to use it:

"protocol: instance variables"
instVarIndexFor: instVarName ifAbsent: aBlock
	"Answer the index of the named instance variable."
	| index |
	index := self classLayout instVarIndexFor: instVarName ifAbsent: [0].
	index = 0 ifTrue: [
		^self superclass == nil
			ifTrue: [aBlock value]
			ifFalse: [self superclass instVarIndexFor: instVarName ifAbsent: aBlock]].
	^self superclass == nil
		ifTrue: [index]
		ifFalse: [index + self superclass instSize]

The speed-up comes from avoiding the previous path of filtering allSlots to allVisibleSlots (and creating an Array), then collecting the slot names and finally searching the name.

Does it make sense to integrate such speed-ups?

cheers
holger
Re: [Pharo-users] Speeding-up >>#instVarNamed: in Pharo-5.0 and beyond?
> On 29 Sep 2016, at 10:25, Marcus Denker wrote:
>
> Slice committed:
>
> https://pharo.fogbugz.com/f/cases/19155/speedup-instVarNamed
>
> Marcus

that was quick! I had locally modified it to use

1 to: specLayout size do: [ ... ]

and then use specLayout at: directly. It seemed to make a small difference as well.

So I was "lucky" to pick PointerLayout as the class to put the method in? classLayout will always be an instance of PointerLayout (or its subclasses)?

thank you
holger

PS: It luckily speeds up instVarNamed:put: as well :)
Re: [Pharo-users] Speeding-up >>#instVarNamed: in Pharo-5.0 and beyond?
> On 29 Sep 2016, at 12:28, Denis Kudriashov wrote:
>
> Cool.
>
> What the percentage of speedup?

Random micro benchmark...

| m |
m := Morph new.
[ | var | var := m instVarNamed: #submorphs ] bench

From: 983,066 per second
To:   1,302,098 per second
[Pharo-users] GitFileTree-MergeDriver and using it with git mergetool
Good Morning,

Thierry Goubier has merged a change to use the Pharo "merge" as a tool for "git mergetool", and in this mail I explain how to configure it, give a small example and explain my use cases for it. I am not sure the merge is fully working and would like people to test and review (code and merge results).

Configuration:

Edit ~/.gitconfig and put something like:

[mergetool "mcmerge"]
	cmd = /path/to/GitFileTree-MergeDriver/merge --mergetool $BASE $LOCAL $REMOTE $MERGED

Usage:

$ git mergetool --tool=mcmerge

Use case:

When using git merge to merge n branches, the GitFileTree-MergeDriver will be consulted to merge the configured files. The same tools are not used when git cherry-pick and git rebase fail to merge. In these cases a manual call of "git mergetool" is required. The following cases might be something you encounter.

If you maintain a stable branch but want to backport a specific bugfix from a master branch, you might use "git cherry-pick" and end with a merge conflict on the metadata. Using git mergetool --tool=mcmerge will help you to resolve it.

You try to contribute to a project that is using git for hosting and have either been asked to modify your commits, or the master version has been updated while your code was under review. In this case you would issue "git rebase origin/master". Metadata merge conflicts can be resolved with git mergetool.

For work I was creating a bugfix, but when thinking of deployment I noticed that I need to deploy in two phases: roll out the initial part of the fix and, once it runs everywhere, start rolling out the actual bugfix. What I did was splitting my bugfix branch in two and merging the first part into the repository. After it had been merged, other changes were made before the update was deployed. To continue to work on the second part of the bugfix I rebased it and used git mergetool to merge the metadata.

cheers
holger
[Pharo-users] Debugger and stepping over a function that will have a DNU
Hi, before I try to reproduce this exactly I wondered if the following is a known issue with Pharo5. If I am in the debugger and try to step over a message send and that would generate a DNU, Pharo starts taking 99% cpu time. If I use CMD+./CTRL+. to interrupt it doesn't really work either. I get it to repaint the screen once and see a lot of message boxes with errors but the system remains unresponsive. Is that known? holger
Re: [Pharo-users] Debugger and stepping over a function that will have a DNU
> On 05 Dec 2016, at 13:03, Denis Kudriashov wrote:

Dear Denis,

> It was fixed here 16877 and here 19108. (last allows interrupt in such cases)

is this in Pharo5 or will it show up in Pharo5?

thank you
holger
[Pharo-users] DNU on materializing a fueled out exception
Hi,

I showed Pharo to a friend and wanted to show the nice feature of fueling out an exception and then using FLMaterializer class>>#materializeFromFileNamed: to load it back and get a debugger up. In Pharo5 I am presented a DNU instead.

The DNU is on GTGenericStackDebugger as it doesn't understand the message Fuel is sending. What to fix: Fuel to use the new protocol, or GTGenericStackDebugger to honor the old protocol?

FueldOutStackDebugAction>>#serializeTestFailureContext: aContext toFileNamed: aFilename
	| serializer |
	serializer := FLSerializer newDefault.
	self encodeDebugInformationOn: serializer.
	serializer addPostMaterializationAction: [ :materialization |
		Smalltalk tools debugger
			openOn: Processor activeProcess
			context: materialization root
			label: 'External stack'
			contents: nil
			fullView: false ].

So it looks like now it should create a debug session first and then pass it to the debugger? I think loading new fuel in Pharo3.x is still possible, so maybe it is best to re-add that protocol?

comments?
holger
Re: [Pharo-users] DNU on materializing a fueled out exception
> On 19 Dec 2016, at 08:53, Holger Freyther wrote:

Good Morning Everyone,

> So it looks like now it should create a debug session first and then pass it
> to the debugger? I think loading new fuel in Pharo3.x is still possible so
> maybe it is best to re-add that protocol?

I hope all of you had a nice break and look forward to 2017.

I understand that in the future (Pharo6 or beyond) there are conceptually better ways to achieve what was working in Pharo3 (and maybe before), but right now something that worked stopped working, and from my point of view such a regression should be fixed.

I am not asking someone to fix what I think is important, but I am still struggling to grasp the process of getting a bugfix into Pharo5. Maybe someone can help to lay it out?

* I pick the approach Max suggested and put it into FLPlatform to open a debugger for a context
* Put it into the Pharo50Inbox?
* Create a ticket?
* Make a slice?

holger
Re: [Pharo-users] DNU on materializing a fueled out exception
> On 28 Dec 2016, at 13:44, Mariano Martinez Peck wrote:
>
>> I am not asking someone to fix what I think is important but I am still
>> struggling to grasp the process of getting a bugfix into Pharo5.
>
> But are you sure there will be a bugfix release of 5.0? Because I am not
> sure about that.

I have committed a slice into the Pharo5 inbox but I don't see the monkey(?) testing the change or the slice being mentioned in the ticket. Do I have to do anything extra?

thanks
holger

Links to the ticket, the slice and the individual Monticello commits:

https://pharo.fogbugz.com/f/cases/19477/Fuel-out-Stack-uses-old-debugger-API-in-Pharo-5
http://smalltalkhub.com/#!/~Pharo/Pharo50Inbox/versions/SLICE-Issue-19477-Fuel-out-Stack-uses-old-debugger-API-in-Pharo-5-HolgerHansPeterFreyther.1
http://smalltalkhub.com/#!/~Pharo/Pharo50Inbox/versions/FuelPlatform-HolgerHansPeterFreyther.63
http://smalltalkhub.com/#!/~Pharo/Pharo50Inbox/versions/FuelTools-Debugger-HolgerHansPeterFreyther.11
Re: [Pharo-users] Which is the best way of opening a PDF and HTML local files from Pharo
> On 26 Feb 2017, at 12:27, David T. Lewis wrote:

Hi,

> I tried "OSProcess command: 'xdg-open ', pdfPathString" in Pharo, and it works
> for me. Maybe check to be sure that pdfPathString is right?

If you don't control pdfPathString you will also need to escape this string. I don't know if Pharo/OSProcess has support for the necessary escaping. E.g. 'foo.pdf; rm -rf /' as input should not lead to the removal of your system. :)

holger
[Pharo-users] Request for help: Please upload test results of your foss projects
Good Afternoon!

one thing I was missing with travis-ci is a graph for the executed unit tests. In Jenkins there is the JUnit plugin that will search for xml files and then provide a table view and a graph. I found this quite useful and it helped me to see that I once accidentally disabled the execution of many tests. For modern CI systems there doesn't seem to be such a feature, and I started to scratch my own itch and have built one.

The backend is built using Pharo5, a python client for the upload and some very basic frontend code using reactjs. The name of the service is "bob-bench.org" and I would be very happy if you could modify your .travis.yml to upload test results to it.

The current main feature is to provide a simple "badge" that lists the number of tests (or failures/errors) and can be put into the README.md next to your build status. More features are planned, but if you have ideas feel free to drop me an email.

Documentation: http://benchupload.readthedocs.io
Example usage: https://github.com/moiji-mobile/smsc
Project overview: https://bob-bench.org/r/gh/moiji-mobile/smsc

looking forward to more uploads
holger
Re: [Pharo-users] Request for help: Please upload test results of your foss projects
> On 26 Feb 2017, at 16:37, Holger Freyther wrote:
>
> Documentation: http://benchupload.readthedocs.io
> Example usage: https://github.com/moiji-mobile/smsc
> Project overview: https://bob-bench.org/r/gh/moiji-mobile/smsc

Configure badge: https://bob-bench.org (specify project name, branch and then the markdown for your README.md)
Re: [Pharo-users] Request for help: Please upload test results of your foss projects
> On 27 Feb 2017, at 04:38, Cyril Ferlicot D. wrote:
>
> Thank you!
>
> I have one problem, with the generated text area disabled I cannot copy
> the generated badge without playing with the HTML of the page.

Please re-load and see if it is better? The textarea is now readOnly but not disabled.

holger
Re: [Pharo-users] Request for help: Please upload test results of your foss projects
> On 26 Feb 2017, at 23:37, Serge Stinckwich wrote: > > Nice job Holger ! > I try and I have some problems : > https://travis-ci.org/PolyMathOrg/PolyMath/jobs/205545068 Sorry about it. It seems with the containerized infrastructure I can not install binaries. I have created a pull request to use "pip install --user benchupload" and updated the documentation. The Pharo6 failure in #testMaximumIterationsProbabilistic seems to be unrelated? thank you holger
[Pharo-users] Please commit/fix GAMysqlBinReader>>#timeStampFrom:
Hi, on Pharo5 and the "latest"(?) stable release of Garage it is implemented as:

    timeStampFrom: aStream
        "ByteStream"
        | dt |
        dt := self dateTimeFrom: aStream.
        ^ dt ifNil: [nil] ifNotNil: [dt asTimeStamp]

There is no implementor of asTimeStamp. From what I read, a TimeStamp is just a DateAndTime with a reduced range, so asTimeStamp can be omitted? thank you holger
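A minimal sketch of the fix being asked for here, assuming the callers are happy with a plain DateAndTime (the selector and comment are the ones quoted above; the conversion is simply dropped):

    timeStampFrom: aStream
        "ByteStream"
        | dt |
        dt := self dateTimeFrom: aStream.
        "DateAndTime already covers what the old TimeStamp offered, so return it as is"
        ^ dt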
Re: [Pharo-users] [Pharo-dev] What is the craziest bug you ever face
> On 9 Mar 2017, at 12:36, Stephane Ducasse wrote: > > Hi guys > > During the DSU workshop we were brainstorming about what are the most > difficult bugs we faced and what are the conceptual tools that would have > helped you. Tracking down a problem where a header file was like this:

    struct touch_screen_event {
    #ifdef SOME_FLAG
        ... other fields
    #endif
        int x;
        int y;
        int pressure;
    };

The touchscreen library was compiled with -DSOME_FLAG but the code using that library didn't have the flag set. This means the code using the touchscreen events read x/y from the wrong offset in memory. The example program of the touchscreen library worked while the real user of it didn't. It would have helped to embed struct sizes and offsets into the shared library to find differences at link time. --- Keyboard handling in kdrive (an X server for embedded/mobile usage): After plugging/unplugging USB into the device, the keyboard started to generate wrong keycodes. In Linux (depending on your keyboard mode) every key event is represented as a byte(?). This worked for a long time, but then keyboards started to have more keys, so a special key value is used to indicate that a multi-byte sequence will follow. As it turns out, plugging/unplugging generated a multi-byte keyboard event... Not sure what would have helped? :)
Re: [Pharo-users] snap package can't find vm-display-X11
> On 12 Mar 2017, at 19:32, Alistair Grant wrote: > > > $ ldd /snap/pharo/x1/usr/bin/pharo-vm//vm-display-X11.so > not a dynamic executable your host doesn't have the 32bit libc library/dynamic linker? What does file say on the file? I never used snap but is the process running in the same environment? Is $DISPLAY set from the process point of view? Can it connect to the display socket? holger
[Pharo-users] Pharo6 server deployment and no home directory
Hi, as Pharo6 is around the corner I have moved my CI build from tracking Pharo5 to Pharo6 but I ran into a problem. If $HOME is either not set at all or points to a wrong directory, I run into the error below. As this is a server application, I run multiple VMs with the same image and there is no home directory, so I would prefer that no information is persisted at all. Can this be done? Have there been any changes in Pharo6 in regard to this? It also seems to have changed from Fuel->Ston for the identifier? Is this intended? holger

HOME=/home/blabla ./vm/pharo --nodisplay My.image eval --save '(NonInteractiveTranscript onFileNamed: #stdout)' install

PrimitiveFailed: primitive #createDirectory: in UnixStore failed
UnixStore(Object)>>primitiveFailed:
UnixStore(Object)>>primitiveFailed
UnixStore(DiskStore)>>createDirectory:
UnixStore(FileSystemStore)>>ensureCreateDirectory:
UnixStore(FileSystemStore)>>ensureCreateDirectory:
UnixStore(FileSystemStore)>>ensureCreateDirectory:
FileSystem>>ensureCreateDirectory:
FileReference>>ensureCreateDirectory
FileLocator(AbstractFileReference)>>ensureCreateDirectory
GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>ensureDirectory
GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>save:
GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>load:
GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>ensure:
GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>>ensure:
GlobalIdentifier>>ensure
GlobalIdentifier class>>initializeUniqueInstance
GlobalIdentifier class>>uniqueInstance
SystemSettingsPersistence class>>resumeSystemSettings
[ :persistence | persistence resumeSystemSettings ] in PharoCommandLineHandler>>runPreferences in Block: [ :persistence | persistence resumeSystemSettings ...etc...
BlockClosure>>cull:
SystemDictionary(Dictionary)>>at:ifPresent:
SmalltalkImage>>at:ifPresent:
PharoCommandLineHandler>>runPreferences
PharoCommandLineHandler>>activate
PharoCommandLineHandler class(CommandLineHandler class)>>activateWith:
[ super activateWith: aCommandLine ] in PharoCommandLineHandler class>>activateWith: in Block: [ super activateWith: aCommandLine ]
NonInteractiveUIManager(UIManager)>>defer:
PharoCommandLineHandler class>>activateWith:
[ aCommandLinehandler activateWith: commandLine ] in BasicCommandLineHandler>>activateSubCommand: in Block: [ aCommandLinehandler activateWith: commandLine ]
BlockClosure>>on:do:
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 15. Apr 2017, at 00:23, Juraj Kubelka wrote: > > Hi, Hey! >> As this is a server application and I run multiple VMs with the same image >> and there is no home directory I would prefer that no information is >> persisted at all. Can this be done? Have there been any changes in Pharo6 in >> regard to this? > > Do you think that checking if `FileIdentifier home` exist solves the issue? > > Can we detect headless state? > >> >> It also seems to change from Fuel->Ston for the identifier? Is this intended? > > Yes, this is intended and should not produce problems. Thank you for your quick reply. What I find odd is that this error seems to be coming from within: GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>>ensure: self shouldCallPreviousPersistence ifTrue: [ previousPersistence ensure: existingDictionary ]. So there is the "Fuel" Persistence and we want to migrate things. Fair enough but in the migration we do have: >>load: existingDictionary "It loads stored information into existingDictionary." self preferences exists ifFalse: [ "This is a new computer, so we define new computer UUID. User still has to agree about sending data if it is not has been done yet." ^ self save: existingDictionary ]. So self preferences exists is true and now the old (non-existent data?) is being saved and we crash and exit on save. * Why isn't the migration from Ston to Fuel more explicit? * Why is "load" trying to save? * Why is "ensure:" used instead of load? * Not sure why "self preferences exists" seems to end in true? have a nice weekend holger >> FileLocator(AbstractFileReference)>>ensureCreateDirectory >> GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>ensureDirectory >> GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>save: >> GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>load: >> GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>>ensure: >> GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>>ensure: >> GlobalIdentifier>>ensure
Re: [Pharo-users] Pharo6 server deployment and no home directory
> On 15. Apr 2017, at 07:59, Holger Freyther wrote: > > > * Why isn't the migration from Ston to Fuel more explicit? > * Why is "load" trying to save? > * Why is "ensure:" used instead of load? > * Not sure why "self preferences exists" seems to end in true? Is there an option to not run startUp options at all or single step through them? I have the suspicion that "exists" returns true while it should not. Will see how to move forward. holger

Breakpoint 4, dir_EntryLookup (pathString=0x84cd258 "/home/build/.config/pharo", pathStringLength=25,
    nameString=0x84ce920 "org.pharo.gt.spotter.event.recorder.fuel", nameStringLength=40,
    name=0xfffcb1bc "", nameLength=0xfffcb1ac, creationDate=0xfffcb1a4, modificationDate=0xfffcb1b0,
    isDirectory=0xfffcb1a8, sizeIfFile=0xfffcb198, posixPermissions=0xfffcb1b4, isSymlink=0xfffcb1b8)
    at /home/travis/build/pharo-project/pharo-vm/opensmalltalk-vm/platforms/unix/plugins/FilePlugin/sqUnixFile.c:270
270     in /home/travis/build/pharo-project/pharo-vm/opensmalltalk-vm/platforms/unix/plugins/FilePlugin/sqUnixFile.c
(gdb) p printCallStack()
0xfffd41a0 M UnixStore(DiskStore)>basicEntryAt: 0x842dd28: a(n) UnixStore
0xfffd41c0 M UnixStore(DiskStore)>nodeAt:ifPresent:ifAbsent: 0x842dd28: a(n) UnixStore
0xfffd41e4 M UnixStore(FileSystemStore)>exists: 0x842dd28: a(n) UnixStore
0xfffd4200 M FileSystem>exists: 0x842dd38: a(n) FileSystem
0xfffd421c M FileReference>exists 0x84c9d78: a(n) FileReference
0xfffd4234 M FileLocator(AbstractFileReference)>exists 0x84c32c8: a(n) FileLocator
0xfffd4254 I GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>load: 0x84a6d40: a(n) GlobalIdentifierFuelPersistence
0xfffd4278 I GlobalIdentifierFuelPersistence(GlobalIdentifierPersistence)>ensure: 0x84a6d40: a(n) GlobalIdentifierFuelPersistence
0xfffd429c I GlobalIdentifierStonPersistence(GlobalIdentifierPersistence)>ensure: 0x84a6d30: a(n) GlobalIdentifierStonPersistence
0xfffd42c0 I GlobalIdentifier>ensure 0x84a6510: a(n) GlobalIdentifier
0xfffd42e0 I GlobalIdentifier class>initializeUniqueInstance 0x9a2def0: a(n) GlobalIdentifier class
0xfffd4300 I GlobalIdentifier class>uniqueInstance 0x9a2def0: a(n) GlobalIdentifier class
0xfffccfd0 I SystemSettingsPersistence class>resumeSystemSettings 0x9a2d9f0: a(n) SystemSettingsPersistence class
[Pharo-users] Pharo6 ombu-session files in pharo-local
Hi, as part of moving a test image from Pharo5 to Pharo6 I also noticed that on each start a new "ombu-session" folder will be created (and never cleaned up). For a server-side deployment this is quite unfortunate as I want/need to use a fixed amount of disk space. I think Norbert has similar concerns with this feature. We are using "image-launch" to run the image and then either systemd|monit|runit|kubernetes will execute (and re-execute) image-launch. So potentially many many folders are created and never used. On deployments with many images, it is rare to make online changes and even more rare to try to recover them. Would it be possible to create the underlying storage or disable it completely? thank you holger
Re: [Pharo-users] Voyage - collecting data from Mongo
> On 28. Apr 2017, at 14:27, Mark Rizun wrote: > > Hi, Hi! > Is it possible to retrieve data from Mongo collection if it was not created > via Voyage? > Meaning that I do not have a class in Pharo that would correspond to said > collection (should I implement one?). Yes. But you probably need to customize a bit. 1.) Make sure you refer to the right collection...

    descriptionContainer
        ^ VOMongoContainer new
            collectionName: 'yourExistingCollectionName';
            yourself

2.) For materialization.. you should at least have _id (okay, every entry has that), VOMongoSerializer fieldVersion (#version on my old Voyage) and VOMongoSerializer fieldType (#instanceOf), and then things should work out. You can probably add #version and #instanceOf to your existing objects? holger
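A minimal sketch of how the two points above fit together, assuming the Magritte-described Voyage used in this thread; LegacyEntry and 'legacyEntries' are hypothetical names for the pre-existing collection, and repository is an already-connected VOMongoRepository:

    "Class-side methods on the hypothetical LegacyEntry class"
    LegacyEntry class >> isVoyageRoot
        ^ true

    LegacyEntry class >> descriptionContainer
        ^ VOMongoContainer new
            collectionName: 'legacyEntries';
            yourself

    "The usual Voyage queries should then run against the existing data:"
    repository selectAll: LegacyEntry.
    repository selectOne: LegacyEntry where: [ :each | each name = 'foo' ].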
Re: [Pharo-users] Voyage - collecting data from Mongo
> On 30. Apr 2017, at 10:47, Mark Rizun wrote: > > Hi, Hi! > Holger, thank you, I will try your suggestion. > However, I use MongoDB 3.4, and I think that Voyage has support for only > versions under 3.0. Am I right? oh? Is that written somewhere in the Voyage documentation? The MongoDB wire protocol is upwards compatible, so while one might not use the new features yet, your application will work. holger
Re: [Pharo-users] Voyage - collecting data from Mongo
> On 30. Apr 2017, at 11:18, Mark Rizun wrote: > > > > Sorry, I wasn't very accurate in previous email. > Actually, my friend is using MongoTalk/Voyage, and she encountered a problem > that it is not possible to work with databases created in 3.0+ versions of > MongoDB. > Probably this issue is related to new storage types (connection part is fine). I am running a Pharo3.0 image (with an older Voyage) against MongoDB 3.2. I think I am already using "WiredTiger" as database. If you can reproduce it, we can have a look. The only issue I have seen is Sabines authentication failure with more "modern" schemes. holger
[Pharo-users] Resolving DNS in-image
Norbert and I looked at using DNS for service discovery and ran into some of the limitations of the NetNameResolver[1]. In the end I created an initial DNS implementation in Pharo called Paleo-DNS[2] to overcome these. DNS is a protocol we use every day but rarely think of. There is an active IETF community that is evolving the protocol and finding new usages (service discovery is one of them). In DNS there are different types of resource records (RR). The most commonly used ones in a client ("stub") are "A" for IPv4 addresses, "AAAA" for IPv6 addresses, "CNAME" for aliases, and "SRV" records. So far only support for "A" records was implemented. So if you are curious about DNS then this is a great opportunity to add your favorite RR implementation to it and send a PR. There are probably 20+ of them to go. ;)

Query example using DNS-over-TLS (DoT) to Google Public DNS:

    PaleoDNSTLSTransport new
        destAddress: #[8 8 4 4] port: 853;
        timeout: 2 seconds;
        query: (PaleoDNSQuery new
            transactionId: (SharedRandom globalGenerator nextInt: 65535);
            addQuestion: (PaleoRRA new rr_name: 'pharo.org.');
            addAdditional: (PaleoRROpt new udpPayloadSize: 4096))

[1] It's blocking on Unix, on Mac only one look-up may occur at a time and it returns exactly one address. There is also no IPv6 support. [2] https://github.com/zecke/paleo-dns
Re: [Pharo-users] Resolving DNS in-image
> On 28. Mar 2019, at 08:02, Sven Van Caekenberghe wrote: > > Hi Holger & Norbert, > great. Regardless of how many versions exist. We should get one into the image with proper platform integration. I wasn't aware of your code but I assumed it is something you could write, hence the Paleo prefix. Now that the Paleo code is the Neo one is more funny... > NeoSimplifiedDNSClient default addressForName: 'pharo.org'. "104.28.27.35" > > One of my goals was to use it as a more reliable, non-blocking 'do we have > internet access' test: > > NeoNetworkState default hasInternetConnection. "true" What is internet access and how would this be used? Is this about captive portals? With local network policy the big anycast services might be blocked but the user can still reach services. Or with deployed microservices they might reach other but not the outside? ... snip ... > The main problems are concurrent and asynchronous requests, as well as error > handling. > > I would be great if we could do this well in-image. (But getting OS DNS > settings is hard too). > > We should talk ;-) Agreed that getting the OS DNS settings is hard but not impossible (go seems to get away with its implementation on unix). It seems we manage to honor platform settings for http proxies and I am confident we can do it for DNS as well. I planned to solve concurrency by creating one stub resolver per request and having a shared cache. The internet can be a hostile place and DNS is an easy victim. Cache poisoning does exist and random source port, 0x20 randomization, random transaction ids, disrespecting PTMU ICMP messages are the few mitigations we have. Let's definitely talk. I hang out in the pharo discord group. :) holger
Re: [Pharo-users] Resolving DNS in-image
> On 29. Mar 2019, at 10:07, Sven Van Caekenberghe wrote: > > Holger, Sven, All! Thanks for moving it to GitHub! Pharo Days: I am in APAC right now and I am not sure if I make it. I am hesitating. Maybe we can have a Google Hangout to discuss this (if not too inconvenient for the ones present?). Unix system resolver config discovery: The FreeBSD manpages are quite good. I think we need to parse resolv.conf, hosts and nsswitch (Linux, FreeBSD). It's probably okay to not support everything initially (e.g. I have never seen sortlist being used in my unix career). Also the timeouts for re-reading these file are interesting (inotify/stat/lazily reread might be preferable). https://www.freebsd.org/cgi/man.cgi?resolv.conf https://www.freebsd.org/cgi/man.cgi?hosts https://www.freebsd.org/cgi/man.cgi?query=nsswitch.conf Windows resolver config discovery: It seems https://docs.microsoft.com/en-us/windows/desktop/api/iphlpapi/nf-iphlpapi-getnetworkparams populates a FIXED_INFO that includes a list of resolver addresses. MacOS config discovery: Starting with the Unix implementation might not be terrible. My interest: I would like Pharo to improve on the networking side and I have worked with recursive resolvers and authoritative servers in my last job. It seemed obvious to combine these two when Norbert tried NetNameResolver and only got one IPv4 address and I looked at the C implementation. The other interest is that I am following the IETF DNS development (on dnsop/dprive/doh with interesting topics). I think having a manageable DNS toolkit will help me to play with specs/standards in the future. More responses inline. >> What is internet access and how would this be used? Is this about captive >> portals? With local network policy the big anycast services might be blocked >> but the user can still reach services. Or with deployed microservices they >> might reach other but not the outside? > > For years there is this issue in Pharo that if we build features that require > internet access (say for example automatic loading of the Catalog when you > start using Spotter, but there are many more places where this could add lots > of value), that people say "don't do this, because it won't work when I have > slow or no internet (like on a train)". This sounds like "bearer management"? It seems like consulting the OS for the network status might be better/more consistent? > The core cause of the problems is that the current NameResolver is totally > blocking, especially in failure cases, which gives a terrible experience. Yes. That's horrible. The MacOS implementation is actually asynchronous but has a level of concurrency of one. :( > One way to fix this would be with the concept of NetworkState, a cheap, > reliable, totally non-blocking way to test if the image has a working > internet connection. Related is the option of 'Airplane Mode', so that you > can manually say: "consider the internet unreachable". Makes sense but is difficult as well. Just because we can't resolve one name doesn't mean that NetNameResolver won't lock-up soon after. :( I think we have to come up with ways to deal with just because all I/O is blocking in a Pharo Process doesn't mean that there is no concurrency. Is this only true for files+dns? In the bigger context I would like to have something like CSP in Pharo. > I would *very* much prefer not to depend on any obscure, hard to maintain VM > code (FFI would just be acceptable). ack. 
> What I tried/implemented in NeoDNSClient (which inherits from the one-shot > NeoSimplifiedDNSClient) is a requestQueue and a cache (respecting ttl), where > clients put a request on the requestQueue and wait on a semaphore inside the > request (respecting their call's timeout). A single process (that > starts/stops as needed) handles the sending & receiving of the actual > protocol, signalling the matching request's semaphore. (The #beThreadSafe > option needs a bit more work though). In my implementation I have separated the transports in their own classes. For UDP we always want to have a fresh socket to get a new source port assigned, for TCP, TLS and DoH it might make sense to keep the connection open a bit. In some ways if I open 15 db connections with Voyage, I'm not concerned about 15 DNS queries. The implementation will be a lot more simple (no synchronization, no need to reason about concurrency) but on the other hand coordination is what we have today. I think we can achieve coordination with an easier way. E.g. register pending requests and allow other clients to subscribe on the result. > I am curious though, what was your initial motivation for starting PaleoDNS ? > Which concrete issues did you encounter that you wanted to fix ? What I like and found with my implementation: * It would be nice if ZdcAbstractSocketStream understood uintX/uintX: * My record classes can be parsed and serialized. In gener
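Coming back to the resolv.conf parsing mentioned earlier in this thread, a minimal sketch of extracting the configured nameservers on Unix, assuming a readable /etc/resolv.conf and ignoring search domains, options and sortlist:

    | nameservers |
    nameservers := OrderedCollection new.
    '/etc/resolv.conf' asFileReference contents lines do: [ :line | | tokens |
        tokens := line trimBoth substrings.
        "lines look like 'nameserver 127.0.0.53'; comments and other directives simply don't match"
        (tokens size >= 2 and: [ tokens first = 'nameserver' ])
            ifTrue: [ nameservers add: tokens second ] ].
    nameservers  "e.g. an OrderedCollection('127.0.0.53')"

Re-reading the file lazily (stat/inotify) and merging in /etc/hosts and nsswitch.conf would come on top of this, as discussed above.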
[Pharo-users] Google Protobuf - Small update and parsing a tensorflow GraphDef
A short update on the current state of my implementation. I heard there is interest in using it to parse TensorFlow models.

0.) Load the BaselineOfProtobuf from https://github.com/zecke/pharo-protobuf

1.) Generate code. Use the Google protoc to generate a descriptor set:

    $ protoc -o tf.pb --include_imports tensorflow/core/framework/graph.proto

And then use this descriptor to generate code:

    | descriptor nameTable generator |
    descriptor := GPBFileDescriptorSet materializeFrom: 'tf.pb' asFileReference binaryReadStream.
    nameTable := GPBTypeNamesVisitor new.
    nameTable customPrefix: 'TF_'.
    generator := GPBGeneratingVisitor new
        typeNames: nameTable;
        targetPackage: 'Tensorflow-Definitions'.
    descriptor visit: nameTable.
    generator visit: descriptor.

2.) Parse a model (e.g. the Inception v3 model):

    TF_GraphDef materializeFrom: 'inception_v3_2016_08_28_frozen.pb' asFileReference binaryReadStream

3.) ??? I guess load it into TensorFlow? I am not sure if the endianness for the Float is correct (if not, the weights are wrong, so please be careful and have a look). There are still plenty of TODOs left. JSON and TextProto parsing need to be implemented. Working on the official regression suite is needed as well, as are strict/non-strict modes for parsing.
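For step 0, loading the baseline with Metacello would look roughly like this (a sketch only; the exact path inside the repository, e.g. a 'src' subdirectory, may need adjusting):

    Metacello new
        baseline: 'Protobuf';
        repository: 'github://zecke/pharo-protobuf';
        load.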
[Pharo-users] FFI and when to convert the stack pointer (an Alien) into an ExternalAddress
I am toying with Ben's bindings to clang-c (C interface to the Clang C-indexer API). One of the main features of the C API is a way to visit the translation unit. This is implemented as a callback into Smalltalk, with "cursors" providing indirect and simplified access to the nodes of the AST. The return value indicates how to continue the traversal (continue, stop, recurse).

    FFICallback
        signature: #( CXChildVisitResult ( CXCursor cursor, CXCursor parent, CXString client_data))
        block: [ :cursor :parent :clientData |
            cursor spelling. "<- booom but no crash"
            aCXChildVisitResult value "@1" ].

"cursor" will be an instance of CXCursor and is passed by value, and the C struct size is >16 bytes (on Unix this is passed on the stack). The resulting CXCursor has a handle that is invalid to perform any call-out with:

    CXCursor getHandle
        FFIExternalStructureReferenceHandle
            handle: an Alien (isPointer = true, the stack ptr)
            offset: 0

ExternalStructure defines valid types as ByteArray (the data) or ExternalAddress (the pointer). FFIExternalStructureReferenceHandle is missing there and I will create a separate thread/ticket about it. So somewhere between:

    FFICallback>>#valueWithContext:sp:
    ...
    FFICallbackArg>>#extractStructType:
    FFIExternalType>>#handle:at:
    FFIExternalStructureType>>#basicHandle:at:

we will need to convert the handle + offset (one based) to an ExternalAddress. Now the protocol of >>#handle:at: and >>#integerAt:, >>#pointerAt: indicates that we want to carry address+offset until late in the conversion. And I can't put my finger on where it should be done. I have created:

    Alien>>#referenceStructAt: byteOffset length: length
        ^ self isPointer
            ifFalse: [ super referenceStructAt: byteOffset length: length ]
            ifTrue: [ (ExternalAddress fromAddress: self addressField) + (byteOffset - 1) ]

This "fixes" it. But that is at the last point in the conversion. It shadows another problem as well. So in fact maybe changing FFICallbackArgumentReader>>#extractPointerOfType: and >>#extractStructType: to read:

    ...
    pair := self nextBaseAddressForStructure: type.
    baseAddressToRead := ExternalAddress fromAddress: pair first address. "<- modified"
    offsetOfBaseAddress := pair second.
    ...

is this the right way forward? We convert the stack to an ExternalAddress early on and use it. With the above change to FFICallbackArgumentReader the CXCursor from the stack will read as:

    CXCursor getHandle
        FFIExternalStructureReferenceHandle
            handle: ExternalAddress
            offset: 0

I am still not able to push this cursor to the stack. But that is for another mail. WDUT? holger
Re: [Pharo-users] Pharo on OpenSUSE (FFI / libgit2 errors)
Hi, I just ran into the very same problem and into further problems when building from source. I will try to see which help I can provide to come to a solution: libgit2 requires libssh2 which requires OpenSSL 1.0.x which is not installed/installable. Building fails as OpenSUSE puts amd64 libraries into a lib64 folder. Manually copying it around fixes the build. I am using OpenSUSE tumbleweed right now (a rolling release Linux distribution). holger > On 11. Jan 2020, at 17:47, Jan Blizničenko > wrote: > > Hello > > I would like to use Pharo on OpenSUSE, which is only Linux distro on our > university PCs, however, I am getting FFI to libgit2 related errors just > about everywhere. By starting Launcher itself, by starting an image (Pharo 7 > and 8) and fetching probably any repository (tried Roassal 2 on Pharo 6, 7, > 8). I would really like to be able to use Pharo on university PCs, but I need > you help with finding out what might be wrong and what to do about it. The > main and only instruction for Linux is currently "Unzip the archive in a > place where you have write privileges.", which obviously does not work for > OpenSUSE. It works fine for Ubuntu and Debian. > > Thank you > > Jan > >
Re: [Pharo-users] Pharo on OpenSUSE (FFI / libgit2 errors)
It's exclusively due to curl-gnutls (libcurl linked against GnuTLS instead of OpenSSL for license preference). A version of OpenSSL 1.0.0 seems to be in the bundle. This was tested against http://files.pharo.org/get-files/80/pharo64-linux-stable.zip

1.) Download that file
2.) Unzip that file somewhere
3.) In the same directory execute the below script. It fetches gnutls-curl from Ubuntu and its dependencies.
4.) Please report if that worked for you or not.

#!/usr/bin/env bash
# Fetches Ubuntu Xenial dependencies and copy them into the Pharo directory
set -ex

add_dependencies() {
	local package="$1"
	local url="http://mirrors.kernel.org/ubuntu/pool/${package}"

	wget -O tmp.deb ${url}

	# TODO(zecke): This avoids using dpkg-deb but assumes a third generation deb
	ar x tmp.deb data.tar.xz

	# TODO(zecke): This assumes multi-arch packaging. It's true now.
	tar -xv --strip-components=3 -C lib/pharo/5.0-201902062351 -f data.tar.xz ./lib/x86_64-linux-gnu || \
		tar -xv --strip-components=4 -C lib/pharo/5.0-201902062351 -f data.tar.xz ./usr/lib/x86_64-linux-gnu

	# clean-up
	rm tmp.deb data.tar.xz
}

add_dependencies main/c/curl/libcurl3-gnutls_7.47.0-1ubuntu2.14_amd64.deb
add_dependencies main/r/rtmpdump/librtmp1_2.4+20151223.gitfa8646d-1ubuntu0.1_amd64.deb
add_dependencies main/libi/libidn/libidn11_1.32-3ubuntu1.2_amd64.deb
add_dependencies main/n/nettle/libnettle6_3.2-1ubuntu0.16.04.1_amd64.deb
add_dependencies main/n/nettle/libhogweed4_3.2-1ubuntu0.16.04.1_amd64.deb
add_dependencies main/n/nas/libaudio2_1.9.4-4_amd64.deb

> On 11. Jan 2020, at 18:24, Holger Freyther wrote: > > Hi, > > I just ran into the very same problem and into further problems when building > from source. I will try to see which help I can provide to come to a solution: > > libgit2 requires libssh2 which requires OpenSSL 1.0.x which is not > installed/installable. > > Building fails as OpenSUSE puts amd64 libraries into a lib64 folder. Manually > copying it around fixes the build. > > I am using OpenSUSE tumbleweed right now (a rolling release Linux > distribution). > > holger > > >> On 11. Jan 2020, at 17:47, Jan Blizničenko >> wrote: >> >> Hello >> >> I would like to use Pharo on OpenSUSE, which is only Linux distro on our >> university PCs, however, I am getting FFI to libgit2 related errors just >> about everywhere. By starting Launcher itself, by starting an image (Pharo 7 >> and 8) and fetching probably any repository (tried Roassal 2 on Pharo 6, 7, >> 8). I would really like to be able to use Pharo on university PCs, but I >> need you help with finding out what might be wrong and what to do about it. >> The main and only instruction for Linux is currently "Unzip the archive in a >> place where you have write privileges.", which obviously does not work for >> OpenSUSE. It works fine for Ubuntu and Debian. >> >> Thank you >> >> Jan >> >> >
[Pharo-users] Voluntarily cancelling requests ("applying an expiration date")
tl;dr: I am searching for a pattern (later code) to apply expiration to operations.

Introduction: One nice aspect of MongoDB is that it has built-in data distribution[1] and configurable durability[2]. The upstream project has a document called "Server Discovery and Monitoring (SDAM)", defining how a client should behave. Martin Dias is currently implementing SDAM in MongoTalk/Voyage and I took it on a test drive.

Behavior: My software stack is using Zinc, Zinc-REST, Voyage and Mongo. When a new REST request arrives I am using Voyage (e.g. >>#selectOne:) which will use MongoTalk. The MongoTalk code needs to select the right server. It's currently done by waiting for a result. Next I started to simulate database outages. The REST clients retried when not receiving a result within two seconds (no back-off/jitter). What happened was roughly the following:

    [
        "1.) ZnServer accepts a new connection"
        "2.) MongoTalk waits for a server longer than 2s"
        "nothing.. the above waits..."
    ] repeat.

Problem: What happened next surprised me. I expected to have a bad time when the database recovers and all the stale (remember the REST clients already gave up and closed the socket) requests would be answered. Instead my image crashed early in my test as the ExternalSemaphoreTable was full. Let's focus on the timeout behavior and discuss the existence of the ExternalSemaphoreTable and the number of entries separately at a different time. To me the two main problems are:

1.) Lack of back-pressure for ZnManagingMultiThreadedServer
2.) A disconnect between how much time the application layer handling REST is allowed to take and, further down the stack, how long MongoTalk may sleep and wait for a server.

The first item is difficult. Even answering HTTP 500 when we are out of space in the ExternalSemaphore is difficult... Let's ignore this for now as well.

What I look for:

1.) Voluntary timeout. Inside my application code I would like to tag an operation with a timeout. This means everything that is done should complete within X seconds. It can be used on a voluntary basis.

    >>#lookupPerson
        "We expect all database operations to complete within two seconds"
        person := ComputeContext current
            withTimeout: 2 seconds
            during: [ repository selectOne: Person where: [ :each | each name = ... ] ].

    MongoTalk>>stuff
        "See if the outer context timeout has expired and signal,
        e.g. before writing something into the socket, to keep consistency."
        ComputeContext current checkExpired.

    MongoTalk>>other
        "Sleep for up to the remaining timeout"
        (someSemaphore waitTimeoutContext: ComputeContext current)
            ifFalse: [ SomethingExpired signal ]

2.) Cancellation. More difficult to write in pseudo code (without TaskIt?). In my above case we are waiting for the database to be ready while the client already closed the file descriptor. Now we are not able to see this until much later. The idea is that in addition to the timeout we can pass a block that is called when an operation should be cancelled, and the ComputeContext can be checked to see if something has been cancelled?

The above takes inspiration from Go's context package[3]. In Go the context should be passed as a parameter but we could make it a Process variable?

Question: How do you handle this in your systems? Is this something we can consider for Pharo9? thanks holger

[1] It has the concept of "replicationSet" and works by having a primary, secondary and arbiters running.
[2] For every write one can configure if the write should succeed immediately (before it is even on disk) or when it has been written to multiple stores (e.g. majority, US and EMEA) [3] https://golang.org/pkg/context/
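One possible shape for the "Process variable" idea above, as a minimal sketch using Pharo's DynamicVariable. RequestDeadline, DeadlineExpired and the helper selectors are hypothetical names, not an existing API; repository/Person are the placeholders from the pseudo code above:

    DynamicVariable subclass: #RequestDeadline
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'MyApp-Concurrency'.

    Error subclass: #DeadlineExpired
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'MyApp-Concurrency'.

    RequestDeadline class >> for: aDuration during: aBlock
        "Run aBlock with a deadline visible to everything it calls in this Process"
        ^ self value: DateAndTime now + aDuration during: aBlock

    RequestDeadline class >> checkExpired
        "Cheap voluntary check, e.g. before writing to a socket"
        self value ifNotNil: [ :deadline |
            DateAndTime now > deadline ifTrue: [ DeadlineExpired signal ] ]

    "Usage, mirroring the pseudo code above:"
    RequestDeadline for: 2 seconds during: [
        RequestDeadline checkExpired.
        repository selectOne: Person where: [ :each | each name = 'Holger' ] ]

This only covers the voluntary-timeout half; cancellation would additionally need a way for lower layers to register interest in "the caller went away", as discussed above.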
[Pharo-users] Workspace/Playground behavior change for unknown "bindings"?
Hey, I recently showed Pharo to a friend and one thing I like doing is to open a playground/workspace and then write and execute: Person new name: '...'; age: 2342; yourself In old versions of Pharo I would get a popup like "Person" not known and asking me if I want to create a class. In Pharo6.1 I did get an error. Was this intentional? holger
Re: [Pharo-users] Workspace/Playground behavior change for unknown "bindings"?
> On 7. Feb 2018, at 08:58, Marcus Denker wrote: > > Ok, pull request for Pharo7: > > https://github.com/pharo-project/pharo/pull/806 > > we should add that to Pharo6, too (Slice is already in the inbox) > lovely! Thank you!
Re: [Pharo-users] Help Petit Parse
> On 12. Mar 2018, at 23:07, Pau Guillot wrote: > > (#word asParser , #digit asParser plus flatten ==> [:node | node second]) > star parse: 'A123A123'. > -> #('123' '123') (((#word asParser , #digit asParser plus flatten ==> [:node | node second]) plus) ==> [ :node | '' join: node]) parse: 'a123a123' => '123123' probably there is a nicer way. ;)
[Pharo-users] Namespaces and ASN1 types in Cryptography package
Hi, I was debugging some test failures and it turns out that my code defines an ASN1IntegerType and the Cryptography package (a dependency of MongoTalk which is loaded into my code) has such a class as well. Thanks to Epicea I could see which package added the method but now I have no idea how to resolve the problem. Mongotalk needs PBKDF2 for modern authentication... Could Cryptography use "ASN1-Model"[1] instead? This is a rather complete[2] ASN1 implementation and used in production for some years. Could the classes be prefixed? If not how can I instruct Metacello to not load a certain package? holger [1] http://smalltalkhub.com/#!/~NorbertHartl/ASN1/source [2] Rather complete parser for ASN1 files but only encoding/decoding for DER/BER (none of the modern ones like aper/uper
[Pharo-users] Right repo for TaskIt and features?
Hey! I am looking into using TaskIt for a new development and wondered about some features. What is the right upstream repository? What are the plans to get the builds green? I wondered if somebody thought of remote task execution? What I am missing is handling for overload. E.g. before queuing too many tasks I would prefer an exception to be raised (or the task blocking/slowing down). Signalling an exception is probably more reasonable as one task could queue another task (while >>#value is being executed...). What are the plans here? I can mitigate by always using futures and using >>#waitForCompletion:.. Are there ideas on how to add remote task scheduling? Maybe use Seamless for it? Have workers connect to the scheduler? Other ideas? Who would have time to review an approach and the code? cheers holger
Re: [Pharo-users] Right repo for TaskIt and features?
> On 24. Apr 2018, at 20:16, Santiago Bragagnolo > wrote: > > Hi Holger! > I respond in bold hehe. And in the reply I am back to non rich text. Let me see if I quote it correctly. > > > > On Tue, 24 Apr 2018 at 12:00 Holger Freyther wrote: > Hey! > > I wondered if somebody thought of remote task execution? > > *If you mean something else, I would need more information :). > When you do [ action ] schedule / [ action ] future, both created tasks are > scheduled into the default runner. The default runner is a working pool with > a default 'poolSizeMax' on 4, meaning, limit 4 processes working over the > tasks. (this is a dynamic configuration, you can change it by > TKTConfiguration runner poolMaxSize: 20. ) Yes. But with more work than the workers can handle the queue will grow. Which means the (median/max) latency of the system will monotonically increase.. to the point of the entire system failing (tasks handled after the external deadlines expired, effectively no work being done). For network connected systems I like to think in terms of "back pressure" (not read more from the socket than the image can handle, eventually leading to the TCP window shrinking) and one way of doing it is to have bounded queues (and/or sleep when scheduling work). I can see multiple parts of a solution (and they have different benefits and issues): * Be able to attach a deadline to a task (e.g. see context.Context in go) * Be able to have a "blocking until queue is less than X elements" schedule (but that is difficult as one task might be scheduled during the >>#value of another task). > Are there ideas how to add remote task scheduling? Maybe use Seamless for it? > Since you speak about seamless here, i suppose two different images, doesn't > matter where. > It's not a bad idea to go seamless, but i did not go through the first > restriction of remote executions (if the remote image can or not execute the > task and if both images share the same semantic for the execution), then i > did not yet checked on which communication platform to use for it Right it would need to be homogeneous images (and care taken that the external interface remains similar enough). > Have workers connect to the scheduler? Other ideas? > what do you mean by connection to the scheduler? The workers we use do not > know their pools, if that is what you mean. Let's assume scheduling a task is simple (read something from a socket) but the computation is expensive (database look-up, math, etc). Hopefully one will reach the point where one image can schedule more tasks than a single worker image can handle. At that point it could be neat to scale by just starting another image. By inverting the launch order (workers connect to the scheduler) scaling can become more easy. holger
Re: [Pharo-users] Right repo for TaskIt and features?
> On 25. Apr 2018, at 08:42, Andrew Glynn wrote: > > Generally to avoid this I've used the Synapse micro service bus. It also > allows the creation of an unlimited number of queues, allowing higher > priority tasks to "jump the queue". ' Backpressure' is precisely what > message buses avoid in distributed computing. Can you elaborate and point to which Synapse you are meaning? If you use transport protocols like TCP (in contrast to QUIC or SCTP) there will be head-of-line blocking, how do you jump the queue on a single TCP connection?
Re: [Pharo-users] Right repo for TaskIt and features?
> On 24. Apr 2018, at 23:31, Santiago Bragagnolo > wrote: > > > Yes. But with more work than the workers can handle the queue will grow. > Which means the (median/max) latency of the system will monotonically > increase.. to the point of the entire system failing (tasks handled after the > external deadlines expired, effectively no work being done). > > > Normally the worker pool adjust to the minimal needed workers (there is a > watch dog checking how much idle processes are there, or more workers are > needed, and ensuring to spawn or stop process regarding to the state). > So, the number poolMaxSize is just a maximal limit. This limit should be set > for ensuring that the tasks that are running concurrently are not incurring > into too much resource consumption or into too much overhead leading to kind > of trashing. > I am not really friend of setting only a number for such a complex > problematic, but so far is the only approach I found that it does not lead to > a complex design. If you have better ideas to discuss on this subject, i am > completely open. (the same to deal with priorities by general system > understanding rather than absolute numbers) I think we might not talk about the same thing. Any system might end up being driven close to or above its limits. One question is if it can recover from it. Let me try to give you a basic example (and if one changes from 'dev' to a proper work pool one just needs to adjust timings to show the same problem). The code schedules a block that on each invocation takes about one second to execute. But the completion time is monotonically increasing.

    | completions |
    completions := OrderedCollection new.
    1 to: 1000 do: [ :each | | start |
        start := DateAndTime now.
        [ (Delay forSeconds: 1) wait.
          completions add: (DateAndTime now - start) ] schedule.
        (Delay forMilliseconds: 200) wait ].
    completions

Now why is this a problem? It is a problem because once the system is in overload it will never recover (unless tasks are being stopped). The question is what can be done from a framework point of view to gracefully degrade? I am leaving this here right now. holger
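A minimal sketch of the back-pressure idea from this thread (not a TaskIt API): a counting semaphore caps the number of in-flight tasks, and the producer blocks once the cap is reached. limit and runBounded are hypothetical names; [ ] schedule is the TaskIt extension used above:

    | limit slots runBounded |
    limit := 100.
    slots := Semaphore new.
    limit timesRepeat: [ slots signal ].
    runBounded := [ :aBlock |
        "blocks the caller while 'limit' tasks are already queued or running"
        slots wait.
        [ [ aBlock value ] ensure: [ slots signal ] ] schedule ].
    runBounded value: [ (Delay forSeconds: 1) wait ]

As noted earlier in the thread, a blocking producer can deadlock when a task schedules further tasks from inside >>#value, so signalling an exception instead of waiting may be the safer policy there.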
Re: [Pharo-users] libfreetype6 missing in the dependencies of ubuntu Pharo distribution
> On 27. Apr 2018, at 04:55, Peter Uhnák wrote: > > Hi, > > I've just tried installing pharo from package manager on Elementary OS, which > is a ubuntu derivative. > > All went well, except fonts weren't working in 32 bit version, and I had to > install by hand "libfreetype6:i386" ... is it missing in the dependencies? which package did you install? holger
[Pharo-users] FileStream deprecation in Pharo7
tl;dr could we extend PackageManifest>>#isDeprecated to provide reasoning and pointers to potential replacements? I was loading some of my code into a Pharo7 image and while debugging noticed that FileStream is deprecated (the text is struck through in the Playground). But neither the FileStream class comment nor the ManifestDeprecatedFileStream has an indication of what to use instead. What do you think of making the deprecation notice carry more signal? E.g. provide the reasoning for why it was removed ("simplification", "no replacement for XYZ", "Look at XYZ")?
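One possible shape of the proposal, as a sketch only; the class-side selector #deprecationReason is hypothetical, not an existing Pharo API:

    ManifestDeprecatedFileStream class >> deprecationReason
        "Answer a human-readable reason plus a pointer to the replacement"
        ^ 'FileStream is superseded by the FileSystem library; see FileReference and FileLocator'

Tools like the Playground could then surface that string next to the strike-through rendering.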
Re: [Pharo-users] FileStream deprecation in Pharo7
> On 27. Apr 2018, at 21:13, Sven Van Caekenberghe wrote: > > Holger, Sven! > The answer is simple: use FileSystem, it has been in the image for years. > > The 'Deep into Pharo' book has a chapter about it. > > Sven > > (Apart from that, you are right: we can always write more documentation). thank you for the answer and the pointer. I wondered how we could make the replacement discoverable in the image? It can be documentation but maybe we can make the Manifest more expressive? holger
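For readers landing here with old FileStream code, a small sketch of the FileSystem API Sven points to (the file name is just an example):

    "Reading a whole file"
    'data.txt' asFileReference readStreamDo: [ :stream | stream upToEnd ].

    "Writing"
    'data.txt' asFileReference writeStreamDo: [ :stream | stream nextPutAll: 'hello' ].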
Re: [Pharo-users] [Pharo-dev] libfreetype6 missing in the dependencies of ubuntu Pharo distribution
> On 27. Apr 2018, at 19:22, Peter Uhnák wrote: > I've installed both ( apt install pharo6-32-ui pharo6-64-ui ), but the font > problem was present only for 32bit. > > Note that I was launching the VM packaged with ZeroConf installation, I've > only used apt mainly to get all dependencies. > > Now I tried to also run "pharo6-32-ui Pharo.image" (and "pharo6-64-ui > Pharo.image"), > and in both cases I got errror > > "/usr/bin/pharo6-32-ui: line 6: /usr/lib/i386-linux-gnu/pharo6-vm/pharo: No > such file or directory > > and > > "/usr/bin/pharo6-64-ui: line 6: /usr/lib/x86_64-linux-gnu/pharo6-vm/pharo: No > such file or directory What's missing is that pharo6-32-ui should depend on pharo6-32. And the later package has the dependency on libssl and libfreetype and provides the "pharo" binary. thanks for pointing that out and sorry you had to experience it.
[Pharo-users] Pharo70 session start and silent failures
I am facing a problem with the new SessionManager>>#snapshot:andQuit: code. I have had plenty of Pharo70 images that didn't restore anymore as the code is waiting for the "wait" semaphore. For sure it is something my code is doing, but could anyone think of ways to make it more robust and handle failures more gracefully? My main concerns are:

* When the failure becomes noticeable it is too late. :(
* It fails silently. Maybe WorkingSession>>#runStartup: shouldn't rely on the UIManager doing the right thing (before the UI was fully initialized?)
* Debugging is hard, there is no indication of why it broke, and getting to the situation of breakage takes a bit of time (installing the baseline..).

holger
Re: [Pharo-users] Pharo70 session start and silent failures
> On 14. May 2018, at 17:39, Guillermo Polito wrote: > > > Can you give me more details about how to reproduce it? 1.) Load MCZ into image (ignore the missing dependency) http://smalltalkhub.com/mc/osmocom/Core/main/OsmoCore-HolgerHansPeterFreyther.43.mcz 2.) Save image => Image is now "frozen" (with "save and quit" it would be broken for good) > You're loading a baseline as a startup action? As a startup script? I start and stop processes and Denis has pointed out to have had issues with it as well.
[Pharo-users] gRPC for pharo?
Hey, I have collected some experience using gRPC[1] and would like to make clients and be a server from Pharo. After digging into the gRPC implementation it seems feasible[2]. Would someone be interested in collaborating on an implementation? cheers holger [1] An RPC framework using HTTP2 and sending Google Protobuf serialized data. It is used by etcd (distributed key value store, master election), apparently containerd (of docker) and the list is probably increasing. It has interesting load balancing support... [2] Wrap the C-API with with FFI, modify the gRPC protobuf compiler to generate Pharo classes with slots for the native types, create a polling Pharo Process to pull the completion_queue. With ThreadedFFI we could have lower latency and have a blocking call on the completion queue.
Re: [Pharo-users] About the IoT Hackathon last Friday
> On 17. Nov 2018, at 14:15, Norbert Hartl wrote: > > To get a better impression we made an image film of the event. Now the 4K > version is available on youtube. > > https://www.youtube.com/watch?v=dIl9FAatKyw neat!
Re: [Pharo-users] MongoCursor>>execute raising 'Unexpected responseTo in response'
> On 17. Dec 2018, at 01:04, Sebastian Sastre > wrote: > > Hi All, Holger... The Mongo protocol allows multiple requests to be in flight and uses a client-assigned id to indicate which request was responded to. MongoTalk is only fit to send/handle one request at a time per socket. Without more context it is difficult to say which failure mode you have, but I can see these possible ones: 1.) Due to concurrency you are using the same connection but make more than one request. 2.) Our request id goes wrong (not sure how this would happen) 3.) Mongod sent an unsolicited response 4.) Many other things I can't think of right now. Is 1.) possible in your code? Do you have a packet trace to see which _responses_ you received and which request was answered? holger > I've seen this today: > http://forum.world.st/MongoCursor-gt-gt-execute-and-MongoTalk-changes-td4889293.html > > > After talking with some people in the Pharo chat group at Discord about this > erratic error I'm having with MongoTalk's MongoCursor . > > > > Once it happens, the stream stays open and Mongo says isValid true but no > other operations can be executed. > > I've loaded MongoTalk with: > > Metacello new > githubUser: 'pharo-nosql' project: 'voyage' commitish:'1.?' path: 'mc'; > baseline: 'Voyage'; > load: 'mongo'. > > in a Pharo 6.1 image. > > Did this Unexpected responseTo happened again to you? > > nextRequestID has the code you mention in the issue: > > nextRequestID > ^requestID := requestID + 1 bitAnd: 16r3FFF > > Do you have any further hint on why the issue? Thanks! > > > Sebastian
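On failure mode 1.) above, a minimal sketch of ruling out concurrent use of the shared connection by routing every command through one Mutex. mongo stands for the shared MongoTalk connection and withConnection is a hypothetical helper, diagnostic scaffolding rather than a MongoTalk feature:

    | lock withConnection |
    lock := Mutex new.
    withConnection := [ :aBlock |
        "only one request/response pair in flight on the shared socket at a time"
        lock critical: [ aBlock value: mongo ] ].
    withConnection value: [ :conn | "run exactly one MongoTalk command against conn here" ].

If the 'Unexpected responseTo' disappears with this in place, concurrent access (option 1) was the culprit; a packet trace remains the more precise tool.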
Re: [Pharo-users] RSA and cryptography
> On 23. Jan 2019, at 12:55, Norbert Hartl wrote: > Hi! > Is there anyone having crypto algorithms implemented or used a native lib to > do so. We are using the Cryptography package and have problems. The code is > quite old so we won’t spend time fixing it. We rather interface with a native > lib. So I'm asking upfront: does https://download.libsodium.org/doc/quickstart provide the functions you require? In the past I have used some of the Pharo/Squeak bindings for it. holger