hide options passed to a script
If I pass options to a script or something similar like this: some_sql_stuff.sh $username $password $database the content of the variables can be viewed with top or ps while the script is running. If I don't remember completely wrong from the bad old days, the text that was displayed was actually the variable name and not its content. Or is there some trick to hide the text that I don't remember? I do know that the proper solution would be not to pass the options at all, but some pieces of software insist on having the credentials passed as command-line strings instead of reading them from stdin.
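For the archives, a minimal sketch of the kind of workaround I mean, assuming the called script can be changed to read the password on stdin; the script name, variable names and the file path are only placeholders:

  #!/bin/sh
  # anything passed as an argument is visible to every user via ps(1)/top(1):
  #   some_sql_stuff.sh "$username" "$password" "$database"
  # reading the secret from a root-only file and handing it over on stdin
  # keeps it out of the argument list:
  password=$(cat /etc/db_secret)         # owned by root, chmod 600
  printf '%s\n' "$password" | some_sql_stuff.sh "$username" "$database"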
Re: HowTo gpio with com-port?
I spent some time trying to toggle the pins on the serial port in various ways. The easiest way for me was to install pyserial; controlling the pins in Python takes only two or three lines of code. This is neat if you just want to do some basic stuff. Jan Klemkow wrote: Hello, I want to get some signals from an electronic circuit at my serial port com0. I don't know how to attach the pins from the serial port with the gpioctl tool. I think my hardware is not supported, but I don't know exactly. In my dmesg there is nothing like this:
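Roughly what those two or three lines look like, as a sketch only -- it assumes pyserial is installed and that the port shows up as /dev/cua00, so adjust the device name and the pin names to whatever your dmesg says:

  python -c '
  import serial
  s = serial.Serial("/dev/cua00")
  s.rts = True             # drive the RTS output high (older pyserial: s.setRTS(True))
  print(s.cts, s.dsr)      # read the CTS and DSR inputs (older: s.getCTS(), s.getDSR())
  '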
mountd occupies port 993
mountd and imaps occupy the same port, 993. Are there any good ways of telling OpenBSD that mountd should not use that port? The quick'n'dirty solution is to kill mountd in rc.local, wait until the imap mail server has occupied the port, and then start mountd again. Another solution would be to tell the mail server software to listen on some other port and use pf to redirect it, but that seems unnecessary if a better solution exists. Any ideas?
Re: mountd occupies port 993
I am running 4.3 and the problem arose after upgrading from a previous version. Well spotted :-) Thanks a lot! Philip Guenther wrote: On Thu, Jul 2, 2009 at 1:18 AM, Per-Erik Persson wrote: mountd and imaps occupy the same port, 993. Are there any good ways of telling OpenBSD that mountd should not use that port. ... Upgrade to OpenBSD 4.4 or later, as that version made /etc/rc automatically tell the kernel not to assign to dynamic services any of the ports mentioned in /etc/services, and imaps has been in /etc/services since 3.something. If you're running 4.4 or later, please verify that 1) /etc/services contains "imaps 993/tcp", 2) you're running the stock /etc/rc, 3) the output of "sysctl net.inet.tcp.baddynamic" contains ",993,", and 4) you're running a stock mountd. Philip Guenther
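For anyone finding this later, the checklist above boils down to something like this on a 4.4 or newer box (a sketch, not verified on my 4.3 machine):

  grep imaps /etc/services                  # 1) should show "imaps  993/tcp"
  sysctl net.inet.tcp.baddynamic            # 3) the list should contain ,993,
  ls -l /sbin/mountd                        # 4) the stock binary from the release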
Re: About Xen: maybe a reiterative question but ..
I might be flamed for this statement, but not being able to run inside a virtualized environment is not an option in the future. Most servers you can buy today are too powerful for taking care of only one task. It is really handy to be able to "shuffle" around the CPUs to the virtual machine that needs them at the moment. OpenBSD is much too powerful to be used only on Soekris and WRAP boxes as a firewall for the home user. If OpenBSD doesn't adapt to the virtualization trend it will be used only as an obscure firewall box. If I need to run Linux as Dom0 to be able to put most of my OpenBSD machines into one single box (well, two actually if you want failover, and you probably want that), the security sacrifice is OK to me, at least knowing that the alternative is to not run OpenBSD at all, since I would need too many machines and too much electricity and be forced to build a new server room. The firewall and the KDC will probably not be virtualized yet, but everything else will be soon. Luca Corti wrote: On Tue, 2007-10-23 at 01:11 +0200, ropers wrote: unavoidable. The question is, is that a worthwhile trade-off? Is this a reason not to support Xen? Or should the user be given that option regardless of the inherent limitations and consequences? A proper Dom0 port of Xen to OpenBSD would solve this by removing the Linux dependency. However, this would probably require a significant effort on the OpenBSD side and a Xen hypervisor code audit. Also, from earlier discussion on the list it seems this kind of virtualization may impact security, which is in direct contrast with OpenBSD goals. Can someone elaborate more on this? ciao Luca
Re: reading sensor RS-232/485 output
I don't have any web pages to throw at you, but converters from RS-232 to RS-485 exist. There are also plug-in cards for Soekris boards that I would assume to be working. I have a lot of stuff I plan to hook up to OpenBSD, but have not found a good way to get the data out without writing too much code. It feels like reinventing the wheel each time. If anyone knows of an easy way to add hooks to sysctl that can be monitored by the sensorsd framework without hacking the kernel, I would be really happy to know. Jacob Yocom-Piatt wrote: i am planning on pulling live rate data from some manufacturing equipment using a red lion rate meter with RS-232 or 485 interface http://www.redlion.net/Products/DigitalandAnalog/Counters/CounterRate/CUB5.html what is the best way to pull this data, using base OS utilities if possible? if coding this is most expedient, handing me a pointer to a useful information address is sufficient. i'm under the impression that openbsd doesn't support RS-485 interface cards. do correct me if i'm wrong here. cheers, jake
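For just eyeballing what the meter sends, the base-system tools go a long way. A rough sketch -- /dev/cua00, 9600 baud and the log path are guesses, check your dmesg and the meter's manual:

  cu -l /dev/cua00 -s 9600                  # watch the raw RS-232 output, ~. quits
  # for unattended capture, set the line up and just read the device:
  stty -f /dev/cua00 9600 raw
  cat /dev/cua00 >> /var/log/ratemeter.raw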
Re: The OACK Project
This rings a bell for me. I don't know if it is still true, but "a while ago" tftpd was binding to the network card it found first. Try to run it on a machine that only has one network card and see if it works better. If you look at older postings you will probably find the exact problem. However, what you describe might be another problem; I spent a lot of time trying to get an old Mac to boot via tftp and never succeeded until I accidentally hooked the client up to the other network card. Jonathan Eifrig wrote: Rogier Krieger wrote: On 1/24/07, Jonathan Eifrig <[EMAIL PROTECTED]> wrote: tftpd[]: oack: Permission denied That may have something to do with *file* permissions. Quoting tftpd(8): "The use of tftp(1) does not require an account or password on the remote system. Due to the lack of authentication information, tftpd will allow only publicly readable files to be accessed." Are the files you're trying to serve world-readable? Yes. :-) As I said, the problem is client-specific: a tftp client running on the same machine as the server can retrieve files with no problem. Clients on remote machines time out. It's as if the tftpd process is not allowed to use eth0 or some such.
Is it possible to fix a stale NFS handle without rebooting?
When the NFS server gets disconnected the filesystem disappears, and I can live with that. After all, networks go down now and then. But unfortunately the location where the directory was mounted will be impossible to list, even after the server is up again. Trying to unmount or mount the directory will also fail and just freeze the console. df doesn't work either. I have tried to kill mountd, nfsd and rpc.lockd, empty /var/db/mountdtab and bring the daemons up again, but the problem still persists. tcpdump tells me that the machine doesn't even try to reconnect the lost connection. The last option is to reboot, but there must be a better solution for a busy server! Would amd solve this problem instead of mounting the shares in fstab?
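For reference, the obvious things to try from the client side look roughly like this (the mount point is just a placeholder); on my machine they simply hang:

  umount -f /mnt/data           # force-unmount the stale mount point
  mount /mnt/data               # then mount it again from fstab
  fstat -f /mnt/data            # see which processes still hold files open there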
alix 2c3 and i2c
A while ago I purchased an ALIX board. The plan is to hook up some external i2c sensors to it. I can see the i2c header on the board, but while reviewing the dmesg I cannot find anything related to i2c. Does the header have no real function, is the driver for the i2c bus not written yet, or do I need to enable it in some way? Reading the code under i2c gives me hints about bit-banging the GPIO, but that is just guessing.
nfs failover in openbsd
Earlier on the list there have been discussions on setting up failover solutions with carp. I think most people agree that carp does a wonderful job. However, there seem to be problems with NFS servers, which need a little bit more work. I can find information about NFSv4 and syncing files with rsync, but no follow-ups saying that it actually works and how it should be done. Is it possible to get it up and working properly in OpenBSD? I have seen some Linux solutions but they look really ugly.
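The kind of setup I am thinking of, as a sketch only -- the hostnames, paths and interval are made up, and it does nothing for open file handles or NFS lock state, it only keeps the exported tree in sync behind a shared carp address:

  # on the standby server, from cron every few minutes:
  rsync -aH --delete master:/export/home/ /export/home/
  # clients mount the carp'd hostname, e.g. in fstab:
  #   nfs.example.org:/export/home /home nfs rw,nodev,nosuid 0 0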
How to get syslog to trigger an event.
A long time ago I used the following setting in syslog.conf: *.crit |mail -s "blablabla" [EMAIL PROTECTED] But it doesn't seem to work nowadays. I suspect the chrooting of syslogd might have something to do with it. Is there some other very obvious way that I have missed to get a hint from sensorsd that some of my computers are overheating? I suspect my AC is not cool enough to get me through the summer.
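In case the pipe really is gone for good, the fallback I am considering is a plain cron job; if I read sensorsd.conf(5) right there is also a command keyword that can run a script when a limit is crossed. A rough sketch of the cron variant -- the 60 degC threshold and the mail address are placeholders:

  #!/bin/sh
  # mail a warning if any temperature sensor reports more than 60 degrees
  hot=$(sysctl hw.sensors | awk -F= '/temp/ && $2+0 > 60')
  [ -n "$hot" ] && echo "$hot" | mail -s "overheating: $(hostname)" admin@example.org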
Re: Filesystem redundancy
AFS would handle your storage in a redundant and distributed way where you can "easily" add and remove machines. But this is not a thing you set up in an afternoon :-) People seem to be afraid of it because of its complexity, but when the work is done you wonder why people pay huge amounts for NAS and similar things that sometimes don't work nearly as well as the glossy brochure promised. It scales well, but I don't know about the performance. A while ago there were some discussions on the list about OpenAFS; has someone written a complete, or at least half done, installation guide yet? Julian Smith wrote: I've been wondering about how to cope with random hardware failures when data is being received from a WAN and written to local storage. As I understand it, CARP(4) will enable any one of N machines to handle incoming requests, so hardware failure of up to N-1 machines will be handled. But if each of these machines writes received data (e.g. emails) to a shared hard drive, then we are back to a single point of failure (if each machine writes to its own individual hard drive(s) then we end up with no sharing of data). We can make the drive use RAID, but RAID controllers can also fail. One way of handling this would be to write a filesystem that copies the contents of modified files over a network before close() returns. That way, as long as an SMTP server (say) checks the return from close() before telling the sender that it has received everything ok, we can avoid any single point of failure. If the data is copied to all the other machines in a CARP `family', then we should end up with perfectly synchronised machines, each of which can take over at any time. The obvious downside is potential speed problems. This has the nice property that the unit of replacement is individual machines, with no need for complicated and expensive hardware like Network Attached Storage/RAID. If something fails, install a fresh machine, sync its hard drives a few times with one of the other machines (whose contents will be changing due to incoming data from the WAN), temporarily turn off the WAN, sync a final time, and restore the WAN. I've written a simple test library dupfs that does this by intercepting open() and close() with LD_PRELOAD, using system("rsync ...") to do the synchronisation, and it works in trivial test cases. Any simple-minded file-locking by dupfs would lead to deadlock I think, so something else (CARP?) would have to ensure that only one of a number of machines was active at any time. I expected there to be standard solutions to this sort of problem, but I was unable to find anything which didn't involve expensive hardware. ISPs seem to accept that they will suffer downtime due to hardware failure, and occasionally lose emails. So, am I barking up the wrong tree here? What am I missing? - Julian
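For what it's worth, here is how the dupfs idea looks from the shell, as a sketch only -- dupfs.so, the daemon name, the peer hostnames and the paths are all placeholders, not a finished implementation:

  # run the daemon with the interposing library so its close() calls trigger a sync:
  LD_PRELOAD=/usr/local/lib/dupfs.so some_mail_daemon
  # inside the close() hook, the sync itself could be as dumb as:
  for peer in peer1 peer2; do
      rsync -aR /var/mail/./newfile "$peer":/var/mail/
  done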
OT: Re: Help with lpd and XP
One reason I have read about is that there were problems with buggy print servers that did not clear out downloaded fonts and other things (especially on a LaserJet). Setting the file size to something "invalid" would lead the software to do a small soft reboot and clear up settings from the previous print job. A really ugly solution that could lead the Unix-based print server to start buffering for a 4 GB file in the worst case. Greg Thomas wrote: On 12/11/05, Garance A Drosihn <[EMAIL PROTECTED]> wrote: At 10:25 AM -0800 12/4/05, Greg Thomas wrote: On 12/4/05, Steve Murdoch <[EMAIL PROTECTED]> wrote: > > Any issues I had printing from XP went away when I enabled > LPR Byte counting in the LPR port settings. Any ideas why that is? Apparently Windows (in general) does not like to keep a byte count for a file. It is not a saved attribute of a file, so "something" (I don't know what) has to count the bytes. This is overhead, so it defaults to off. I know little about Windows, so that description might not be 100% accurate. However, I do know about Unix implementations of lpd. When a file is transferred, the remote side first says how many bytes it is going to transfer, and it then transfers that amount of data. The RFC for lpr implies that you can put in a zero for the length, in which case lpd will just keep reading until the end-of-file condition. But in fact there are no implementations of lpd for Unix which actually do that (well, none that I've noticed at least; I guess LPRng might, I haven't checked that one). If you tell lpd you're going to send zero bytes, then by golly it thinks you will send a zero-byte data file. So if you don't turn on LPR byte counting, then these Windows implementations will set the 'count' field to zero, which should work according to RFC 1179, but won't in fact work with most implementations of lpd for Unix. Cool. Thanks for the explanation; it makes complete sense because the queue on the server was always stuck at 0 bytes. I do know that the lpd on the little wireless print server I have doesn't require byte counting from XP boxes. Greg
ypldap and samba
I finally got around to starting to use ypldap on OpenBSD 4.5. It works, thanks! However, using Samba as an ADS member together with ypldap doesn't seem to do it. I still need to add the accounts to passwd to make Samba work. Going through misc gives me some hints, but has anyone actually got it working together?