LINUX GAZETTE

April 2001, Issue 65       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors  |  Search

Visit Our Sponsors:

Linux NetworX
Tuxtops
eLinux.com

Table of Contents:

-------------------------------------------------------------

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2001 Specialized Systems Consultants, Inc.

The Mailbag



HELP WANTED -- Article Ideas

Send tech-support questions, answers and article ideas to The Answer Gang <linux-questions-only@ssc.com>. Other mail (including questions or comments about the Gazette itself) should go to <gazette@ssc.com>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.



I've downloaded the ISO file. Now what do I do with it? I've burned it to a CD, but the CD won't boot.

Thu, 8 Mar 2001 03:29:25 -0000
EK (endak from hotmail.com)

Dear Answer Guy,

I really hope you can help me in my quest to change from Windows to Linux. Here is what I have done so far:

I've got a 2 year old standard PC: PII 333MHz, 128MB RAM, 4.3GB, CD-ROM, Modem, ESS soundcard.

I've downloaded the ISO file from SuSE and it is called live-evaluation-i386-70.iso

I've put this onto a CD. I can see the file on the CD under Windows. When I try to boot up using the CD-ROM, it ignores it. My BIOS is set to Boot From CD, and it will boot from a Windows OS CD, so I know the capability is there.

I'm starting to feel that there is something I need to do to the ISO file when writing it to the CD. I've seen mention of doing this by "burning the image" - whatever that is. I've read the HOWTOs for CDs and ISOs, and they don't explain how to do this under Windows; they assume you have a working Linux environment.

I came across your page, and dozens of others on this subject, and I thought I had found the holy grail as the question is exactly what I was going to ask. The thing is your answer is for a working Linux system - the guy had this as well as Windows - there is no mention of how to do this under Windows. Then the person who asked the original question writes back saying:

"Jim,

Thanks for your information. And ironically, shortly (next day) after I wrote you the email, I did find out what was going wrong and how to fix it.

WinOnCD did have the capability, but it was somewhat a "hidden" feature of sorts.

I do appreciate your response, though.

-Lewis"

He didn't even mention how he did it in WinOnCD!!!! ARRggghhh! I can find this software on the Web, but I still need the knowledge to find the "hidden feature". Oh, this is so frustrating: so close yet so far.

Please please please can you help me to make my ISO file, which took hours to download, into a bootable CD so that I can install my first ever Linux OS?

Thanks, EK.

To trim it down considerably: he has tried the Nero burning software but couldn't make sense of it, and the image that seems to result doesn't work. YaST2 booted from floppy gets as far as reaching for the CD, then continues to complain that the CD isn't valid.
Usually the Gang would suggest a rescue disk, but our perennial favorites wouldn't help here: they don't have cdrecord aboard. You could try muLinux (http://mulinux.nevalabs.org/): boot it from floppy, use the Linux software on it to mount your DOS filesystem, then follow the normal HOWTO to burn the SuSE CD with cdrecord. I have no idea if it would work, though it seems worth a shot.
Would some kind soul out there who lives in both worlds point us to a reliable CD-burning app for Windows, along with some fairly simple instructions? We'd be glad to let you put the article in the Gazette if it will help enough potential Linuxers out there. -- Heather
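For anyone who does get a working Linux environment going (via muLinux or otherwise), the burn itself is short. This is only a sketch: the device triple, partition and mount point below are assumptions that must be replaced with what cdrecord -scanbus and your own disk layout actually report.

```shell
# Find the writer's bus,target,lun triple (output varies per machine).
cdrecord -scanbus

# Mount the DOS partition holding the downloaded image
# (/dev/hda1 and /mnt/dos are placeholders -- adjust to your system).
mount -t vfat /dev/hda1 /mnt/dos

# Burn the ISO as a raw image, not as a file on a data CD.
# dev=0,0,0 is whatever -scanbus reported for your writer.
cdrecord -v dev=0,0,0 speed=4 /mnt/dos/live-evaluation-i386-70.iso
```

The same principle is the answer on the Windows side: every burner program has some "burn image" or "record disc from image file" mode, and that, not dragging the .iso file onto a data CD, is what makes the result bootable.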


Fetchmail question

Mon, 19 Mar 2001 08:47:25 -0800
Rodrigo P Gomez (rpgomez from yahoo.com)

I read Ben Okopnik's article in the February issue of Linux Gazette titled

No More Spam! (a "procmail"-based solution with tips on "fetchmail" and "mutt")

and I tried implementing some of his suggestions. My problem seems to be this: while fetchmail will get my e-mail and pass it off to procmail, which delivers it to a designated file, I can't seem to read that e-mail with kmail. I've set my designated file as a local mailbox for kmail to read, but when I try to read it with kmail I get one of two behaviors:

  1. kmail ignores the e-mail in the local mailbox
  2. kmail nukes the contents of the local mailbox, and does not transfer the contents to my kmail inbox.

I tried setting up kmail to read from the local mailbox with the following lock file options:
mutt dotlocked
mutt dotlocked privileged
Procmail lockfile
FCNTL
None

Any of the above will cause either behavior 1) or 2).

Anyway, I'm hoping you might be able to help me figure out how to read my local mailbox with kmail.

--Rod
P.s. Included is a snapshot of my kmail configuration for reading the local mailbox into my inbox. Hopefully, it will be of some use to you for diagnosing my problem.
P.p.s. Thanks in advance for any help you can give me on this.
P.p.p.s. Here are the configuration files for the various utilities: (I blanked my password in the .fetchmailrc file in this e-mail, for security reasons ;-))
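For comparison, here is a minimal sketch of the two files involved; the server name is made up, and the password is left blank just as in Rod's mail. The usual culprit for symptom 2 is a locking mismatch: procmail should take its own lockfile while writing the mbox (the trailing colon on the :0: line), and kmail should then be set to the matching "Procmail lockfile" (the designated file plus .lock).

```
# ~/.fetchmailrc (pop.example.net is a placeholder)
poll pop.example.net protocol pop3
  user "rod" password "..." mda "/usr/bin/procmail -f-"

# ~/.procmailrc
MAILDIR=$HOME/Mail

# The second colon makes procmail lock the mbox while delivering.
:0:
$MAILDIR/designated-file
```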


nuovo progetto

Thu, 15 Mar 2001 16:38:46 +0100
frangregorio (fbaril from tiscalinet.it)

Dear Gazette friends, my name is Francesco Barilà. Together with some friends I would like to create a shell wordprocessor based on Emacs: is anyone interested?
If you are, send an e-mail to fbaril@tiscalinet.it, or visit linux.interpuntonet.it/angolinux and see the thread "nuovo progetto".
Thanks,
Francesco Barilà

Let him know what you think, folks. For my own part, we could still use a wordprocessor that actually works... -- Heather


syslog-ng

Tue, 27 Mar 2001 18:16:54 +0200
Greg (greg from hellea.be)

Hi, I'm fighting with syslog-ng, trying to centralize the logs of a whole network onto a log server. So far I'm just testing it between two machines. I've been at it a long time with no result: simply nothing appears in the final log on the log server. I've attached the syslog-ng.conf of the client and the server's as well.

You should know that for the server's source I've already tried entering parameters (I mean an IP and a port), but that doesn't work either, so at the moment I'm trying to use the defaults. The same goes for the client.
Can you help me on this?
Thanks
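Without seeing the attachments we can only sketch what the two files generally need under syslog-ng 1.x; note that the network pieces have to be spelled out explicitly on both ends (the address 192.168.1.10 and port 514 are assumptions):

```
# --- server's syslog-ng.conf: accept UDP syslog from the network ---
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_central { file("/var/log/central.log"); };
log { source(s_net); destination(d_central); };

# --- client's syslog-ng.conf: forward local messages to the server ---
source s_local { unix-stream("/dev/log"); internal(); };
destination d_server { udp("192.168.1.10" port(514)); };
log { source(s_local); destination(d_server); };
```

Also check that no packet filter on the server is dropping UDP port 514.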


rebooting using rsh command

Mon, 26 Feb 2001 11:40:46 -0500
rob (rob from esgi.com)

AnswerGuy,

I hope you can help me with this problem.

We've upgraded our kernel from 2.0.0 to 2.2.14-5.0 (Red Hat 6.2). We have a master node that is connected to a total of 14 nodes. In the past, when we rsh'd into a node and wanted to reboot, we would just type 'reboot' at the prompt. We would immediately be kicked back to the master node prompt while the other node was rebooting. This worked out fine.

However, now when we rsh into a node and type 'reboot', the terminal hangs for several minutes until the node has completely rebooted, and only then drops us back to the master node. We want it to work the way it did in the past. Is there some file or configuration I'm missing, some flag to turn off or on?

Any of you gentle readers with experience in this kind of clustering, if you have any idea what it might be, then feel free to lend a hand. We'll publish your answer here. -- Heather
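One guess worth trying while we wait for a clustering reader: newer rsh/rshd pairs hold the connection open until the remote command's stdin and stdout close, so detaching the reboot from the connection sometimes restores the old behaviour. A sketch (node3 is a placeholder):

```shell
# -n closes rsh's stdin; the redirections and the trailing '&'
# let rshd drop the connection instead of waiting out the reboot.
rsh -n node3 'reboot >/dev/null 2>&1 &'
```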


query

Sat, 3 Mar 2001 15:44:33 +0530
Mehul Vora (mehul from now-india.com)

Hi James, this is Mehul from India...

I need to know if I can control bandwidth on a particular interface using ipchains or any other utilities in Linux. Basically, I'll have a multi-homed Linux workstation, and I want to limit bandwidth on one interface to 512 kbps. Is that possible?

T&R
mehul
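ipchains can count and filter packets but cannot shape them; the tc utility from the iproute2 package can, on 2.2 and later kernels. A sketch along the lines of the traffic-control documentation, assuming eth1 is the interface to limit and that it is a 10 Mbit card; verify against your own setup before trusting it:

```shell
# Attach a CBQ queueing discipline to eth1.
tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit avpkt 1000

# One class capped ("bounded") at 512 kbit/s.
tc class add dev eth1 parent 1: classid 1:1 cbq bandwidth 10Mbit \
    rate 512kbit allot 1500 prio 5 bounded isolated

# Send all outgoing IP traffic on eth1 through that class.
tc filter add dev eth1 parent 1: protocol ip prio 16 \
    u32 match ip dst 0.0.0.0/0 flowid 1:1
```

Note that tc shapes outbound traffic only; inbound traffic can at best be policed.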


Defining Keyboard Shortcuts

Sun, 04 Mar 2001 23:04:54 -0500
Daniel S. Washko (dann from thelinuxlink.net)

I would like to define keyboard shortcuts that are not specific to a window manager. I have read documentation on xmodmap, but the information I have found only shows examples of changing a key's action to another pre-defined action - for instance, making the Caps Lock key function as an Escape key. How would I go about assigning a command to a Left Ctrl + F1 key code?

The ultimate use for this is to help a fellow LUG member. He wants to assign text strings to "hot keys" so that the text string is copied into the window he is working in. He wants to be able to save the text strings as he works so that they can be used in other sessions, and to have the ability to delete or change them. I can probably figure out a script to do this, but I am stuck on defining the keyboard shortcut.

Thanks

Daniel S. Washko
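xmodmap can only map keys to other keysyms; it cannot attach a shell command to a chord. One window-manager-independent tool that can is xbindkeys, which reads ~/.xbindkeysrc. A sketch, where the script name is made up for illustration:

```
# Run a (hypothetical) snippet-pasting script on Left Ctrl + F1.
"$HOME/bin/paste-snippet"
  control + F1
```

The script itself could then use a selection tool such as xsel to push the saved string into the X selection for pasting into the current window.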


upgrade 2.2.14 to 2.4 kernel documentation

Mon, 12 Mar 2001 13:19:29 GMT
Mark Taylor (mky from talk21.com)

hello,

Are there any HOWTOs for upgrading the kernel from 2.2.14 to 2.4? I have upgraded the kernel using a general HOWTO, and I have problems mounting the vfat partitions: I get the error "invalid major and minor numbers". I know the way the system deals with special device files has changed in 2.4. Also, the "eth0" ethernet adapter is not being recognised. Any help in finding good documentation for this process would be greatly appreciated.

mark taylor


Linux Sockets Stuck in FIN_WAIT

Mon, 19 Mar 2001 10:09:45 -0500
Ken Ramseyer (ken.ramseyer from lmco.com)

On our project if an established client socket connection on a remote chassis is suddenly terminated (e.g., the chassis is powered off), the socket connection on the local chassis changes from ESTABLISHED to FIN_WAIT1. If we then try to restart our application on the local chassis, it does not work because the socket connection is stuck in a FIN_WAIT1. After 5-10 minutes the socket connection stuck in FIN_WAIT1 clears itself and we can successfully restart our application on the local chassis, but this wait is too long.

Do any of you know how to expedite the process of clearing FIN_WAITs on a Linux/UNIX chassis under these conditions? The only way we can get them to clear is to either wait 10 minutes or perform a chassis reboot (sync; shutdown -r now). Is there a system call that can tell the operating system to close/delete/clean up/remove all socket connections immediately?

The only thing I have run across so far is possibly making a kernel change (e.g., changing #defines in .../include/net/tcp.h), or setting a socket option that causes the sockets to close and clean up faster. Note: these changes may be risky if we shorten the timeouts too much.

Any help would be appreciated.

Thanks,
Ken Ramseyer
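Two directions that don't require a recompile, offered as sketches rather than tested fixes. For the restart failure itself, setting SO_REUSEADDR on the listening socket before bind() usually lets the new process start while the old connections drain. The kernel timers can also be inspected and tightened through /proc on recent 2.2/2.4 kernels (root required, and shortening them too far carries exactly the risk Ken notes):

```shell
# Seconds a socket may sit in FIN_WAIT_2 (FIN_WAIT_1 is bounded
# instead by how often the kernel retransmits the unacked FIN).
cat /proc/sys/net/ipv4/tcp_fin_timeout
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

# On 2.4: fewer FIN retransmissions for orphaned sockets,
# so a dead peer is given up on sooner.
echo 2 > /proc/sys/net/ipv4/tcp_orphan_retries
```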


CD-Writing with an ATAPI CDR Mini-HOWTO

Wed, 21 Mar 2001 22:12:22 +0000
Louise Plumpton (louiseplumpton from homewitheva.fsnet.co.uk)

Hi,
I've read the articles from issue 57 of the Linux Gazette, but I am unable to get my CD-RW to work. I don't think I am managing to emulate SCSI correctly, although I have followed the things suggested. I have a Sony CD-RW (CRX145E ATAPI) and run Mandrake 7.0. I also have an Iomega 100MB Zip drive on hdb, a DVD-ROM on hdc, and the CD-RW on hdd. This is what I've done and what the computer says:

In /etc/rc.d/rc.local added

/sbin/insmod ide-scsi

In /etc/conf.modules added

alias scd0 srmod
alias scsi_hostadapter ide-scsi
options ide-cd ignore=hdd

(also tried replacing srmod with sr_mod)

In /etc/lilo.conf added

append="hdd=ide-scsi"

then in console typed

lilo (and tried /sbin/lilo)

then rebooted, then dmesg gives at the end:

hdb:<3>ide-scsi: hdb: unsupported command in request queue (0)
end_request: I/O error, dev 03:40 (hdb), sector 0
unable to read partition table
scsi0 : SCSI host adapter emulation for IDE ATAPI devices
scsi : 1 host.
Vendor: IOMEGA    Model: ZIP 100           Rev: 14.A
Type:   Direct-Access                      ANSI SCSI revision: 00
Detected scsi removable disk sda at scsi0, channel 0, id 0, lun 0
sda : READ CAPACITY failed.
sda : status = 0, message = 00, host = 0, driver = 28
sda : extended sense code = 2
sda : block size assumed to be 512 bytes, disk size 1GB.
sda:scsidisk I/O error: dev 08:00, sector 0
unable to read partition table

For some reason I can't get the machine to emulate SCSI on anything other than hdb. cdrecord -scanbus only lists the Zip drive, too. The /sbin/insmod ide-scsi command also stops the Zip drive from working. Have you any ideas as to what might be going wrong?

Many thanks for any advice you can offer
Louise
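One sketch worth trying (untested): if Mandrake 7.0's stock kernel has the IDE CD driver compiled in rather than as a module, the "options ide-cd ignore=hdd" line never takes effect, and ide-scsi then attaches to the first ATAPI device it can claim, which is the Zip on hdb. Claiming each device explicitly on the kernel command line avoids that:

```
# /etc/lilo.conf -- in the image= section for the Linux kernel
append="hdb=ide-floppy hdd=ide-scsi"
```

After rerunning /sbin/lilo and rebooting, cdrecord -scanbus should list the Sony writer rather than the Zip.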


GENERAL MAIL



Sci-Linux Project has a listed e-mail address ..

Mon, 26 Feb 2001 10:20:53 -0500 (GMT)
Manoj Warrier (manoj from ipr.res.in)

Hi.

This is in reply to ghaverla@freenet.edmonton.ab.ca's mail about scilinux.freeservers.com not having an e-mail address on the page. In fact, at the bottom of the page, before the disclaimer, we have links to contact us. They appear when I look at it from India in Netscape. I am one of the maintainers; please do contact me with your suggestions, ideas, flames, etc.

We are just crystallising our thoughts after the fruitful discussion with TAG (Jan 2000 issue). We have decided to first install all the relevant packages on a Red Hat partition of our PC, find the library dependencies, make a "library farm", and run ldconfig after adding the library farm to /etc/ld.so.conf. To check what problems this causes, we will then make a "zeroth" version of our CD-ROM and installation script and try installing it all on the Slackware (my dream distro) partition of our PC, test the software, check if anything else goes wrong, etc. It seemed to work for scilab and octave when I did a manual check. From what I have read so far about libraries (not much at all; maybe I should read up and write an article for the Linux Gazette - the best way to learn something), this does not seem to be a wrong thing to do.

Thanks, Manoj, we'd love to see your article! -- Heather

Another thing: we are not trying to make a platform-independent package or installer; we just want to install some selected scientific software packages on any old or new Linux PC. We hope to make a zeroth version, test it on our institute's PCs, and only after thorough testing offer it for download at some volunteering site, or mail it to anyone who pays the postage and CD-ROM cost.

PS: We are still looking for a server to host our site. We do thank freeservers, but it would be nice to host the site on a server that displays only Linux-related adverts.

Manoj
My environment-for-scientific-computing-on-Linux page: http://Scilinux.freeservers.com


Navigation...

Thu, 01 Mar 2001 17:51:56 -0500
James Coleman (jecoleman from upsala.org)

...I love what you do but how about putting links to the rest of LinuxGazette (or at least the front page) at the tops and bottoms of each Answer Gang article? Thanks!

-- Jim Coleman

Once upon a time this was the case, but folks wrote in, saying that having that main navbar as well as the links within the TAG area was a bit confusing... they often hit "next article" when they meant "next TAG message". But, we can try adding back in only the main Index, and see how that works. -- Heather


Mailto URL error in LG Mailbag (followup)

Fri, 2 Mar 2001 00:25:45 -0800 (PST)
Anthony E. Greene (agreene from pobox.com)

I'm glad you're using mailto's that include the TAG address. The problem is the URL construction was incorrect.

There should be an equals sign (=) after the cc parameter. A quick run through sed should fix this before you get too many messages pointing out the error.

If there's any way to add the subject, you could make the URLs look like this:

mailto:user@domain?cc=linux-questions-only@ssc.com&subject=Re:%20SUBJECT

Note that the subject has to go through a filter that replaces spaces with the hex code "%20" to keep the URL legal. In perl, I'd do this:

$mailto_subject =~ s/ /%20/g;

You can quickly test the functionality of a mailto URL by typing variations directly into the address/URL box of the target browser and see if it calls the mail client as you intended.
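Tony's Perl substitution can equally be done with sed; a quick sketch you can paste at a shell prompt, using the addresses from his example:

```shell
subject="Re: SUBJECT"

# Replace each space with its hex escape to keep the URL legal.
encoded=$(printf '%s' "$subject" | sed 's/ /%20/g')

printf 'mailto:user@domain?cc=linux-questions-only@ssc.com&subject=%s\n' "$encoded"
# -> mailto:user@domain?cc=linux-questions-only@ssc.com&subject=Re:%20SUBJECT
```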

Tony

Thanks, Tony, you win the AnswerBubble for the month. I like it when folks not only nail us for a bug (that one was me; I'll go fix the script so it'll be right next time ;>) but also send us Tip-grade material about fixing it. We're trying it this month. I'm sure that our readers will let us know if there are any problems. -- Heather


Heather Stern

Tue, 6 Mar 2001 12:15:33 -0500
Michael Gargiullo (gargiullo from home.com)

I just want to say thank you.

Her greeting in The Answer Gang is beautiful. I'm a relative newbie to Linux (it took me a week of reading and playing to compile a kernel that would find my NICs). I understand the open source idea, and love it. I deal in two other fields where this idea applies in a general sense: Emergency Medical Services, where we share ideas, issues, and solutions not for money but for the knowledge of helping others; and restoring old cars, where things are learned, forgotten, learned again, and, most importantly, passed on. By sharing, we (Linux users) end up with a better app or method.

Again, Thank you

Remember, there are people out there who really appreciate the work that is done.

- Mike Gargiullo


The Answer Gang

Fri, 9 Mar 2001 11:08:38 -0800
Andrew Higgs (ahiggs from ps.co.za)

In Issue 64 you invite Ray Taylor to join The Answer Gang. How would one do that? Is it like a mailing list open to everyone? Can anyone help?

TAG is run like a mailing list in reverse. The public sends in questions, and the subscribers are the answerers. To join, send e-mail to tag-request@ssc.com with "subscribe tag me@mysite.com" in the message body. Then just jump in whenever you have something to say. At the end of the month, Heather selects some of the messages for publishing. -- Mike


GAZETTE MATTERS



SSH article

Tue, 6 Mar 2001 14:55:51 -0800
Bryan Henderson (bryanh from giraffe-data.com)

In the article on ssh, scp, and sftp in the March issue, there is an important area that isn't covered: client/server compatibility.

If you're just doing a basic ssh (to get a remote shell), you're using a standard SSH protocol and any program named "ssh" is likely to work with any remote system that offers a service it calls "ssh."

But scp and sftp are not standard protocols. If you run the scp program from OpenSSH against a remote system that's running an original ssh server, it will not work. (And when I learned this the hard way, it was very hard indeed: the error message isn't "this server doesn't implement this scp protocol". It is, for reasons that took a day of debugging to figure out, "invalid file descriptor"!)


Mean Thoughts on the Linux Router Project

Mon, 26 Mar 2001 09:22:56 -0500 (EST)
--Mark-- (mf from agate.net)

First off let me apologize to all the developers or others who I have offended with my views on the Linux Router Project (LRP). By no means did I want to start a flame war. The truth is that I wrote about something outdated. Second, that article was entirely my own; not the work or opinions of Linux Gazette.

Since I wrote "Mean Thoughts" I have received a great many meaningful and insightful messages from LRP users and developers. If I wrote anything untrue, I want to know about it. One point of contention, for example, was whether the ip command is 'nonstandard'. This is purely subjective. If ip really is standard, it should replace ifconfig and route the way ipchains replaced ipfwadm.

Nevertheless my views on the LRP have changed. I received such an education that I feel obligated to state for the record I have learned uses for each of the three main LRP distributions, EigerStein (http://lrp.steinkuehler.net/DiskImages/Eiger/EigerStein.htm), Oxygen (http://leaf.sourceforge.net/pub/oxygen), LRP 2.9.8 (http://www.linuxrouter.org) --even in embedded systems. I am not brand-loyal. Advocacy is fine, but fanaticism has got to go. I'll use the best tool for the job, and how I determine what is the 'best tool' is purely subjective. Five years ago I preferred 3Com to any other NIC. Why? Two reasons: The founder of 3Com invented Ethernet, and the cards were recognized by all the OSs that the company used. I knew I would not have to worry about cross platform compatibility. Now I prefer SMC. Why? Mainly because all the OSs recognize them but also because I can jumper-select IRQ & I/O on the models I use.

Would I write another 'anti-Linux' article? Sure. But not one that could potentially insult anyone like when I said, 'developers wasting time'. Linux is merely a product. Windows NT is also a product. Never mind the fact that I despise products from Redmond, Washington: I don't think it's a sin to admit that NT is better than Linux at being a "Domain Controller." It does not change how much I like Linux.

Look at the article and notice its verbosity. It's an opinion, not a review. I did not write it solely to explain my (i.e., not Linux Gazette's == don't shoot the messenger) thoughts on the LRP; I also wanted to present other information that may be useful to the Linux community, for example the bit on standardization. I did not write it to maliciously annoy anyone. Also, to my knowledge there is no technically inaccurate information in it. I wrote specifically, "I have not done a lot of work/research with the LRP incarnation at linuxrouter.org as such, but I am familiar with the Materhorn Project." My mistake was that I equated the Linux Router Project as a whole with one flavor, Materhorn.

I may or may not follow with a "Nice Thoughts on the Linux Router Project" article. ;) In any case, I'd like to put all hard feelings aside and hope that anyone who I have offended would do the same.

Sincerely,

Mark Fevola

[Mike] We received several complaints about the article, feeling that it attacked the LRP unfairly. Dave Cinege, the creator of the LRP, was going to write a response addressing the inaccuracies he felt were in the article, but he did not have time to finish his letter by press time. I encourage readers with an interest in routing to follow the links above to the projects' home pages and decide for themselves if the LRP and its offspring are right for them.

Regarding Oxygen, EigerStein and 2.9.8, Dave writes:

They are derivatives of stable releases of LRP, which is currently 2.9.8. I have been creating a new OS similar to LRP for quite some time now. Among the things that have come out of this is a new multi-packaging system (standard?) that is more powerful than rpm or deb, yet not tied to any specific OS.

Regarding the 'ip' command, he writes:

ip allows you to control the extended routing features of 2.2 and 2.4, i.e., multiple routing tables. Ifconfig still works for the primary routing table and interface configuration. ip can replace ifconfig, but ifconfig is still the known standard.
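For readers who haven't met ip yet, the everyday correspondence with the old tools is close; a sketch with made-up addresses (these commands change live interfaces, so they need root and a test machine):

```shell
# ifconfig eth0 192.168.1.5 netmask 255.255.255.0 up
ip addr add 192.168.1.5/24 dev eth0
ip link set eth0 up

# route add default gw 192.168.1.1
ip route add default via 192.168.1.1

# What ifconfig/route cannot reach: an additional routing table.
ip route add 10.0.0.0/8 via 192.168.1.254 table 100
```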

A few letters questioned LG's editorial policy in allowing this article to be published. LG's policy is pretty open. If an article is about Linux, contains hard facts or cultural value (e.g., humorous articles, cartoons and articles about Linux VIPs), covers a topic relevant to a significant portion of the readership, is not an advertisement in disguise, and would still be relevant several months from now, we'll probably publish it. There are borderline cases, and this was one of them.

LG does not have a technical review board to screen every article, although I do send a few questionable articles to The Answer Gang for comment. You, our readers, are LG's technical review board, and usually this system works very well. 99% of LG's articles are published without complaint.

In any case, please remember this article describes one person's experience with certain routing programs. It's not meant to be gospel, in spite of the letter I received that said, "But newbies will read it and think it's gospel!" That's not how it works. If you want gospel, read several people's articles and compare them with your own experience.

Another thing this article does is raise the question, just because we can use Linux in a wide variety of routing situations, should we? Are you choosing a Linux router because it's the most appropriate solution for the task, or simply because "we're a Linux-only shop"? Even if the article failed to present LRP in a fair light, these are still questions worth asking.

As always, if you have any comments about an article, whether good or bad, send them to LG and we will forward them to the author.

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:

Selected and formatted by Michael Conry and Mike Orr

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.


 Linux 2.4.3

Linux 2.4.3 is out. See the changelog or find a kernel mirror.


 Linux Journal and Embedded Linux Journal

The April issue of Linux Journal is on newsstands now. This issue focuses on Internet/Intranet. Click here to view the table of contents, or here to subscribe. All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lj-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.

The March/April issue of Embedded Linux Journal was mailed to subscribers in February. Click here to view the table of contents. Professionals working in the embedded field in the US, Canada or Mexico can get a free subscription by clicking here. Paid subscriptions to other countries are also available.


Distro News


 Caldera

Caldera Systems has announced the open beta availability of its new OpenLinux server product, code-named "Project 42," and an agreement with Lutris to ship the Enhydra Open Source Java/XML application server with the new version. The product is based on the new Linux 2.4 kernel and targets OEMs and VARs. Project 42 incorporates a secure Web server, a file and print server, and a set of network infrastructure servers, including DHCP, DNS, and firewall.


Caldera and SCO have unveiled Open UNIX 8, incorporating support for Linux applications. Open UNIX 8 will maintain compatibility and continuity with the UnixWare 7 operating system while providing a complete Linux environment. In addition, the product will incorporate support for the execution of unmodified Linux Intel Architecture binaries.


 Debian

Debian has chosen Ben Collins as the new Debian Project Leader (DPL).


 Progeny Debian

Progeny Linux Systems have announced that Release Candidate 1 of Progeny Debian is now available for download. Progeny Debian is based on woody, the current testing version of Debian, and is made by a team of leading Debian developers. Company CEO Ian Murdock has said that he expects any changes after RC1 to be bug fixes and cosmetic improvements.

Features of RC1 include: graphical installation and configuration tools, a GNOME interface for debconf, improved hardware detection and USB support, optional migration to GRUB, and automated multiple installations.

For more information about Progeny Debian, visit www.progeny.com.


 SuSE

SuSE Linux has released some very positive news about the future of Linux in Germany. A poll commissioned by SuSE showed that 56 percent of the PC users interviewed have heard of Linux, and that ten percent already use the alternative operating system at home or at work. This statistic indicates that, in terms of distribution, Linux is second only to Windows. Furthermore, 23 percent of the computer users are considering switching to Linux when upgrading their equipment. This information was obtained from a survey recently conducted by the market research institute TNS EMNID of Bielefeld, Germany.


SuSE Linux is now offering a new server version for professional users. SuSE presented the SuSE Linux Enterprise Server at CeBIT in Hanover, Germany. SuSE Linux Enterprise Server is an operating system streamlined for use on servers: it has been optimized for security and stability and comprises all the relevant server services.

SuSE's PowerPC Edition 7.1 will be released in early April. It has kernel 2.4.2 and ALSA (Advanced Linux Sound Architecture) for PowerMacs. SuSE's administration and configuration tool YaST2 is complemented by YOU (YaST Online Update) for updating individual packages after the install.

It also has KDE2 and XFree86 4.0.2. SaX2, the expanded graphical configuration tool that ensures a simple and secure setup of supported graphics cards, is also a new feature. An improved version of MOL (Mac on Linux), the virtual machine used to start MacOS under Linux, completes the distribution.

The range of supported IBM computers with PowerPC processors has been considerably enlarged: SuSE Linux 7.1 PowerPC Edition now runs on IBM Power3 machines. Support for up to 3 GB of RAM and the expanded multi-processor support provided by kernel 2.4 make SuSE Linux 7.1 PowerPC Edition especially attractive for the IBM pSeries 640. Thus, SuSE Linux 7.1 PowerPC Edition is the first Linux solution that supports these computers "out of the box".

The package includes 6 CDs, a 500-page manual, and 60 days of installation support, for EUR 49.00.


News in General


 Lion worm (DNS/BIND security alert!)

Anyone using BIND should be aware that there is a new worm on the loose. The Lion worm attacks certain versions of BIND (the domain name server program). The SANS Institute has plenty of information on the worm, and indicates that BIND versions 8.2, 8.2-P1, 8.2.1 and 8.2.2-Px are vulnerable. BIND 8.2.3-REL has been reported as not vulnerable (this information is preliminary and potentially incomplete). The BIND vulnerability is the TSIG vulnerability that was reported back on January 29, 2001. If you believe your system has been compromised, the SANS Institute has a program, Lionfind, that detects the worm. Now is a good time to get the latest version of BIND from your distribution vendor, run named as non-root, or switch to a BIND alternative.
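To find out which BIND a name server admits to running, ask it (a standard query, though many admins configure the server to hide or fake the answer; ns1.example.com is a placeholder):

```shell
# Query the server's version string over the CHAOS class.
dig @ns1.example.com version.bind txt chaos

# Or, locally on the server, ask the binary itself:
named -v
```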

It is also worth looking at general security issues. To get an idea of how security should be done, check out the results of the Honeynet forensic challenge. Candidates downloaded the partition images of a compromised Linux system and had to find out "who, what, when, where, how". The results show how professionals go about doing these things, but also how difficult and time consuming recovering from a compromise can be. The lesson is "BE PREPARED!"


 Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

LINUX Business Expo
April 2-5, 2001
Chicago, IL
http://www.linuxbusinessexpo.com

Free Web ROI Seminar by Akamai Technologies
April 3, 2001
Seattle, WA
http://www.akamai.com/roitime/

Linux Expo, Madrid
April 4-5, 2001
Madrid, Spain
http://www.linuxexpomadrid.com/EN/home

Lugfest IV
April 21-22, 2001
Simi Valley, CA
http://www.lugfest.org

Linux Expo Road Show
April 23-27, 2001
Various Locations
http://www.linux-expo.com

Linux Africa 2001
April 24-26, 2001
Johannesburg, South Africa
http://www.aitecafrica.com

Open Source Development Network Summit
April 30 - May 1, 2001
Austin, TX
http://osdn.com/conferences/handhelds/

Linux for Industrial Applications
3rd Braunschweiger Linux-Tage
May 4-6, 2001
Braunschweig, Germany
http://braunschweiger.linuxtage.de/industrie/

Linux@Work Europe 2001
May 8 - June 15, 2001
Various Locations
http://www.ltt.de/linux_at_work.2001

Linux Expo, São Paulo
May 9-10, 2001
São Paulo, Brazil
http://www.linux-expo.com

SANS 2001
May 13-20, 2001
Baltimore, MD
http://www.sans.org/SANS2001.htm

7th Annual Applied Computing Conference
May 14-17, 2001
Santa Clara, CA
http://www.annatechnology.com/annatech/HomeConf2.asp

Linux Expo, China
May 15-18, 2001
Shanghai, China
http://www.linux-expo.com

SITI International Information Technologies Week
OpenWorld Expo 2001
May 22-25, 2001
Montréal, Canada
http://www.mediapublik.com/en/

Strictly e-Business Solutions Expo
May 23-24, 2001
Minneapolis, MN
http://www.strictlyebusinessexpo.com

Linux Expo, Milan
June 6-7, 2001
Milan, Italy
http://www.linux-expo.com

USENIX Annual Technical Conference
June 25-30, 2001
Boston, MA
http://www.usenix.org/events/usenix01

PC Expo
June 26-29, 2001
New York, NY
www.pcexpo.com

Internet World Summer
July 10-12, 2001
Chicago, IL
http://www.internetworld.com

O'Reilly Open Source Convention
July 23-27, 2001
San Diego, CA
http://conferences.oreilly.com

10th USENIX Security Symposium
August 13-17, 2001
Washington, D.C.
http://www.usenix.org/events/sec01/

HunTEC Technology Expo & Conference
Hosted by Huntsville IEEE
August 17-18, 2001
Huntsville, AL
URL unknown at present

Computerfest
August 25-26, 2001
Dayton, OH
http://www.computerfest.com

LinuxWorld Conference & Expo
August 27-30, 2001
San Francisco, CA
http://www.linuxworldexpo.com

The O'Reilly Peer-to-Peer Conference
September 17-20, 2001
Washington, DC
http://conferences.oreilly.com/p2p/call_fall.html

Linux Lunacy
Co-Produced by Linux Journal and Geek Cruises

Send a Friend LJ and Enter to Win a Cruise!
October 21-28, 2001
Eastern Caribbean
http://www.geekcruises.com

LinuxWorld Conference & Expo
October 30 - November 1, 2001
Frankfurt, Germany
http://www.linuxworldexpo.de/linuxworldexpo/index.html

5th Annual Linux Showcase & Conference
November 6-10, 2001
Oakland, CA
http://www.linuxshowcase.org/

Strictly e-Business Solutions Expo
November 7-8, 2001
Houston, TX
http://www.strictlyebusinessexpo.com

LINUX Business Expo
Co-located with COMDEX
November 12-16, 2001
Las Vegas, NV
http://www.linuxbusinessexpo.com

15th Systems Administration Conference/LISA 2001
December 2-7, 2001
San Diego, CA
http://www.usenix.org/events/lisa2001


 LinuxFocus

LinuxFocus is a Linux webzine that's been around for years, but may not be familiar to some LG readers. Unlike LG, which is essentially in English with some foreign-language translations, LF was founded with the goal of providing non-English speakers with "enough [Linux] information in their native language that they can join in the Linux community." Currently, seven languages are fully supported and four more are partially supported. Translations happen both ways: there are currently six French articles waiting to be adopted by English translators. LG fully supports LF and wishes it success.


 NetworX and AMD Supply Cluster to Boeing

The Boeing Company is using a Linux NetworX cluster powered by 96 AMD Athlon processors. The system, designed as a high performance cluster, is being used by Boeing Space & Communications in Huntington Beach, Calif. to run computational fluid dynamics applications in support of the Delta IV program. Boeing Delta IV engineers tested multiple processor platforms at Linux NetworX facilities prior to buying the cluster, and selected the AMD Athlon for its price and performance advantages.


Linux NetworX has also announced the development of LinuxBIOS for the Alpha platform. In conjunction with the LinuxBIOS Open Source project, Linux NetworX has replaced SRM firmware on the Alpha platform with a Linux-based BIOS. Users will now have the ability to boot to Linux directly out of the ROM on the motherboard.


 Python Software Foundation & Python Cookbook

ActiveState have announced involvement in the launch of a collaborative programming book, the Python Cookbook, with O'Reilly & Associates. The Cookbook will be a repository of reviewed Python recipes contributed by the Python community for the community. It will be freely available for download. For details please go to the website. ActiveState will also be a founding co-sponsor of the new Python Software Foundation (PSF). The PSF's mission is to provide educational, legal and financial resources to the Python community. More information is available in the full press release.


 Penguin Computing Selects Arkeia Backup for Linux Servers

Knox Software Corp. have announced that the company has entered into a reseller agreement with Penguin Computing Inc. Under the agreement, Penguin Computing will now offer Knox's flagship network backup application, Arkeia, for bundling with Penguin's pre-configured custom Linux servers.


 IBM, Biotech and Linux

IBM has been very active on the Linux front in recent months. IBM efforts in the emerging biotechnology marketplace received a boost with the announcement that Structural Bioinformatics has chosen DB2 for Linux as its strategic development platform for future applications. DB2 will be used to manage more than two terabytes of high-resolution 3-D protein structures, which are used in the development of new medicines.

For more information on IBM's Linux developments, refer to their website.


 OEone Teams Up with EarthLink

OEone and EarthLink are working together to integrate EarthLink's Linux based Internet access software with OEone's Internet-computer operating environment platform.


 TeamLinux and Muze to Expand Relationship

TeamLinux is to expand its relationship with Muze Inc. to provide hardware support and service for its existing and future kiosk customers. This new multi-year contract provides for TeamLinux to be the premier provider for all hardware and service. Muze will continue to provide its proprietary software and be the first level of contact for any Muze system issues.


 Sair Linux & GNU Certification

Sair Linux have announced their new Web site, which has information about Sair Linux and GNU Certification. The company provides training, certification, and educational aids (books, etc.). Sair also invites LUGs to sign up and receive a "Welcome Kit" including T-shirts, brochures, information on Sair Linux, and distro CDs.


 Agenda Computing Sell Linux PDA

California-based Agenda Computing are launching a pure Linux PDA (personal digital assistant) to challenge Palm in the war for market share. Each Agenda VR3 and VR3r is loaded with unique software and hardware features like 16MB of Flash memory, which eliminates the problem of data loss associated with RAM-based units. It also supports 7 languages, is e-mail compatible, and will send a memo or message to a printer by wireless infrared transfer.


 Keyspan Ships 4-port USB Serial Adapter for Linux

The Keyspan USB 4-Port Serial Adapter is intended to allow 4 serial devices to be connected to a single USB port. Each of its male DB9 ports allows connection to RS232 serial devices at data rates up to 960 Kbps. In addition to supporting Linux 2.4, the Keyspan USB 4-Port Serial Adapter works with various Windows flavours. (Note, this is not a review, consult the company for full details.)


 Linux Links

Galleo is a mobile multimedia communicator. It's a nifty-looking PDA with e-mail, web and music capabilities. Unfortunately, their web site is not so nifty: I can't get the menu buttons to show. So click on the Galleo image, or follow this link to get to the products page, use the text links from there, and click on "Virtual Tour". (Requires Javascript and who knows what [Shockwave?] for the movie.)

The Duke of URL has the following to offer:

Some links from the folks at ZDNet's Anchordesk UK

debianhelp.org offers, um, help on Debian.

Linux Valley, an Italian portal for the Linux operating system, has been updated. It offers a range of interactive and community services.

Microsoft says Linux is a threat to intellectual-property rights. Linux Journal disagrees.

Paranoid Backup is designed to "work with cheap tape drives and cheap tapes without shoe-shining or losing data; to never overwrite old backups; and to use as few tapes as possible."

The Pentagon's research agency is preparing to demonstrate a soldier's radio designed to provide mobile communications among individual troops on the battlefield. The network will be based on the Linux operating system. Courtesy Slashdot.

The Linux Expo Birmingham 2001 web site is now online. For information on other Linux-Expo events, consult their website.

OLinux have an interview with Rick Lehrbaum from LinuxDevices.com. OLinux are also currently looking for an investor or a company willing to translate and promote OLinux around the world.

Doug Eubanks has put together a new Linux/RoadRunner help site. He aims to consolidate the various threads in the field.

An article on Microsoft's complicated licensing terms for enterprise users. The title for the Slashdot link is, "Microsoft Turning Screws on Customers".


Software Announcements


 Tom's Root/Boot Updated

Tom Oehser has released a minor, but recommended, update to tomsrtbt. Current version is now 1.7.218. Get it from: http://www.toms.net/rb/. This is something everybody should have on hand in case you someday have to boot from an emergency floppy.


 AbsoluteX Now Available for Download

AbsoluteX, "Linux With a Twist," is available for download at www.absolutex.org. It was unveiled at the Annual Linux Showcase in Atlanta and is now available to developers worldwide under LGPL. AbsoluteX is an X-Window developer toolkit created by Epitera to streamline and facilitate the process of developing customized GUIs for the Linux operating system. Based on the C++ programming language, AbsoluteX is a standard template library (STL) with multiple inheritance methodology, efficient messaging, and programming methods that separate logical and visual aspect class libraries.


 Loki Games

Loki Software has announced an agreement with developer-driven computer and videogame publisher Gathering of Developers to bring the hit PC games Rune and Heavy Metal: F.A.K.K.2 to Linux early this year. Testers are required, register here.

Furthermore, in a race to GPL freedom, Loki Software, Inc. are releasing the latest in their line of open source projects: a complete set of end-user and developer tools for managing software releases.


 "Emerald Isle" Ispell

A new version of the package "ispell-gaeilge" which lets users of International Ispell check their Irish Gaelic spelling has been released recently. Developed by Kevin P. Scannell, a mathematician at Saint Louis University, the package boasts a dictionary of over 200,000 entries covering the many grammatical variations of Irish language words. Mentions of it in Irish national newspapers have introduced many people to the world of Linux for the first time. For people frustrated with the lack of support for minority languages in Windows, projects such as the Irish localisation of Mandrake Linux offer real encouragement. Other projects, such as the spelling checker GaelSpell, are improving the tools available to Windows users, but also help all computer users by providing quality word-lists.


 GARLIC Version 1.1 Released

Version 1.1 of garlic, a free GPL-licensed molecular visualisation program for Linux and Unix, is available at http://pref.etfos.hr/garlic. It aims to be ANSI and POSIX compliant and may be easily ported to other Unix-like platforms. Garlic may be used to analyse proteins, DNA and other large molecules. The latest version includes a number of plot options not available in version 1.0: helical wheel, averaged hydrophobicity, hydrophobic moment, Venn diagram and Ramachandran plot. A screenshot gallery is available.


 The FIEN Group to Sell Teamware Office for Linux in the U.S.

Teamware Group, a Fujitsu subsidiary, and The FIEN Group, a Southern California-based technology consulting organisation, have signed a partner agreement according to which The FIEN Group will sell Teamware Office 5.3 for Linux groupware to customers across the USA. Teamware Office 5.3 for Linux includes facilities for electronic mail, time and resource scheduling, discussion groups, as well as document storage and retrieval. The Teamware Office groupware suite has been on the market since 1989 and was ported to the Linux platform in spring 2000.


 Opera to be released as ad-ware

Opera Software has announced that the final release of its Linux browser will be available for free to all users. The free version has full functionality but contains banner ads. If you don't want banner ads, you can register your free copy for $39, or buy the adless version for the same price. If you are interested in this product, Opera 5 for Linux beta 7 is now out.


 Open Motif Now Supports Latest Linux 2.4 Kernel Distributions

Integrated Computer Solutions has released an updated version of Open Motif Everywhere. This new release officially incorporates Open Group Patch 3 and Patch 4 into the Open Motif release. These patches include bug fixes and updates to the Motif libraries, clients and the demo source code. RPMs (version 4) are also provided for Red Hat Linux 7, SuSE Linux 7.1, and other distributions using glibc 2.2. The latest ICS Open Motif binary and source packages are available for free download at ICS's Motif Community site, the MotifZone. They are also available for $29.95 on ICS's Open Motif Everywhere distribution CD that can be purchased through the ICS Store.


 Kaspersky Lab Introduces the New Version of Kaspersky Anti-Virus for Linux

Kaspersky Lab have announced the release of the new version of Kaspersky Anti-Virus for Linux (3.0 Build 135.3). This new version adds several features, including installer support for different Linux distributions, and a ready-made solution to integrate centralised virus filtering for Postfix e-mail gateways. The new version of Kaspersky Anti-Virus is available for download from the Kaspersky Lab Web site. All registered users of previous versions of Kaspersky Anti-Virus for Linux may upgrade to the new version free of charge.


 Other software

Rob Pitman has released a LGPL licensed software package that provides a "graphical user interface" between a Java application and an ASCII terminal. The package emulates the API of the Java AWT and the Swing toolkit. It provides "graphical" widgets such as Frames, Dialogs, Labels, TextFields and Buttons. One can design the GUI of an application using any Java IDE and then port it to use a text interface with little work. You can get more information about the package at: http://www.pitman.co.za/projects/charva/index.html.


Mahogany Version 0.62 is out. Mahogany is an OpenSource(TM) cross-platform mail and news client. It supports a range of protocols and standards (POP3, IMAP4, MIME, etc.), secure communications via SSL, and can be extended using its built-in Python interpreter and loadable modules.

TUXIA specialises in embedded Linux software suites for Internet and Information appliances. TASTE (TUXIA Appliance Synthesis Technology Enabled) is a solution based on Linux Kernel 2.4 with an embedded Mozilla browser and other functionalities, that can be integrated into any hardware platform.


Copyright © 2001, Michael Conry and the Editors of Linux Gazette.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 65 of Linux Gazette, April 2001

"Linux Gazette...making Linux just a little more fun!"


(?) The Answer Gang (!)


By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and the Gang, the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to linux-questions-only@ssc.com

There is no guarantee that your questions here will ever be answered. You can be published anonymously - just let us know!


Contents:

¶: Greetings From Heather Stern
(?)? --or--
What's this word?
(?)Unable to Install Linux
(?)DNS and telnet
(?)Help on LILO stopping at LI
(?)How can you do a recursive search to find broken symbolic links?
(?)BIOS passwords - Bane of my existence
(!)telecommunication in a hospital --or--
Making the Connection
(!)Setup of Microsoft Outlook Express 5 for Sending of Clear Text
(?)icons
(?)Corrupt Tar Archive
(?)masquerade in sendmail is broken.
(?)neighbour table overflow
(?)VIDEO CARD
(?)fat versus inodes
(?)Installing Linux without cdrom
(!)Installing RedHat 7.0 and a driver for the Chipset Cirrus CL-GD5436
(!)cd-writing mini-howto
(?)This has been bugging me for a while now --or--
Reading the logs
(?)Linux Newbie Frustration --or--
So many users, So few POP accounts
(?)script
(?)Linux Box on windows
(?)HD bad clusters --or--
Take a Breath!
(?)about the adaptation.
(?)Changing the "login-sequence" in Linux?
(?)Linux, X, Dell Video Card
(?)sendmail
(?)about a stubborn mount error
(?)Here is a very stupid question ... --or--
How do I choose?
(?)I was wondering

(¶) Greetings from Heather Stern

It's that stormy month of the year again, when people expect us to be silly in print.
I feel silly for saying this but it seems like we have to every month:
  1. There is no guarantee that questions will ever be answered, especially if not related to Linux.
  2. HTML attachments drive us nuts...
ICANN expressed a desire to make a foolish mess of the entire internet. I wrote "An Open Letter to ICANN" which has been published in Linux Journal recently: http://www.linuxjournal.com/articles/conversations/0022.html
While we're thinking of messes, and Easter coming up, how about cute fluffy bunnies? I cleaned up the ol' home office a bit. I think Dust Puppy (http://www.userfriendly.org) can find a girlfriend named Dust Bunny if he tries hard enough.
As you hopefully know by now this is a Linux magazine and we normally only answer Linux questions. But, it's the silly month, so once again with that cardboard box thread that snuck in ...
And finally, something I've been messing with that makes us all continue to look foolish for using Linux. How can we call ourselves a desktop system when all the word processors suck? Oh yeah. We don't. We just call it an operating system, apps are for distros. Well, they still need to work on it.
The first thing you may wonder is why would Ms. My Box Is More Productive Without Productivity Software even care, anyway? Well, it so happens that a friend of mine, who isn't computer oriented in the slightest, wanted a resumé and of course, since we're close, she asks me. No problem, I think. It's just an rpm -i or an apt-get install away. Right.

For a more positive view, see Tony's telecommunications article this issue.

Don't believe me, eh? Well let's start at the top. WordPerfect is time bomb ware. Their idea of "for personal use" includes dying at 90 days so you have to go get a registration key, allegedly free. In my old shareware days I always avoided timebombs. You never know if they might also try to take your documents with them or something. It's a shame because I always liked their DOS software. I may buy it someday, when I need it for myself, after all, with my consulting biz I guess I don't count as personal use anyway. But I resist - my principles don't call for supporting time bombs. Grr.
I tried StarOffice a few months ago. It shows many of the worst features of having originally been a port from the windows version via some translation library. Its "everything lives inside the Staroffice Window" mold was one of the GUI features I was glad to get away from when I left Windows behind, and its printer configuration is evil and broken. Okay, when it finally works it's rather cool to have numerous Avery papers selectable in the dialog so you can do labels and index cards. But, it's actually easier to set up a printer with plain old lpr and magicfilter. Yuck.
Applix might be okay. I dunno, I was in a hurry, and wanted something a bit smaller. I guess I just hate the idea that I have to download a whole suite just to get one part.
I think I have LyX installed, I try that. I do. It doesn't do a number of things that need doing. I tried to do spring margins and it has its own ideas how wide to make the table. This will never work.
The SIAG people have loose parts. Their word processor is called Pathetic Writer. I tried it... and they're right. If I recall correctly Wordpad has more features. Sigh.
How about AbiWord? Those AbiSuite guys have their heads on straight, let's try it. So happens Terry already has it on his box since we put Progeny on his desk. He tries to use it for one-page reports and growls at it because it can't deal with tabs very well. Hmmm... anyway, just an ssh session over there and access it via X, right? Wrong! It whines that a font is missing. That's insane. Betel has the most complete font collection in the house, since it's set up to be our TTF font server...
Fine. Install it locally. (I have to get the whole suite. Oh well. Get a soda, come back.) One SuSE style rpm i coming up! (wave magic wand) uh, this doesn't load at all, even to pop up with the complaint. No error message in the xterm window, nothing. Fume.
Well, let's try the K office then. Kword coming up. Installs sweet enough. Even runs. (Yay!) Can't do tables even though it has buttons for it. Now, we are talking about everyone's favorite use for spring margins, putting the dates of your last employ all the way to the right, and since almost nothing has proper spring margins, can't do it without tables. At least it does those long beautiful bars, which I had figured would need tables. Even when I use just plain white space to push things to the end, the thing is iffy about whether they show up over there. If I change the font anywhere on the line its metrics are a scramble and things fall off entirely.
On the bright side, its preview feature generates very clean Postscript, not yet encapsulated. So, being the programmer type that I am, I let Kword do what it could, and improved the rest in text mode, previewing directly in ghostview.
One shouldn't have to be a programmer to whip together a friend's job hunting paperwork. It takes us back to the old days, when a CP/M box could be a decent terminal for a brighter Postscript printer, if you slipped it a sneaky enough program.
Oddly enough if I had just thrown it together in HTML it would have been pretty quick. But that would have been in a plain old text editor too -- since the state of the art in WYSIWYG editors for HTML is about the same. Bluefish and August seem to have them beat all over the place. I think I like Bluefish better, it has a feel very similar to HTMLedPro which I used when I used to live more closely with that other operating system.
If the Dot Com Fallout has made your company foolishly let you go, at least the Linux world has room for you. You can check out Linux Journal's Career Center (http://www.linuxjournal.com/employ), Geekfinder (http://www.geekfinder.com), the Sysadmin's Guild (SAGE) Job Center (http://www.usenix.org/sage/jobs/sage-jobs.html), or pay attention to your local area papers for when major high tech Job Fairs are in your area, so you can go to them. There are also some really generic job sites like Dice.Com (http://www.dice.com) or MonsterBoard (http://www.monsterboard.com). If you hate the corporate mold, check out some of the project offers at SourceXchange (http://www.sourcexchange.com) or Collab.Net (http://www.collab.net). Or put up your consulting shingle by listing yourself at Linuxports (http://www.linuxports.com) and getting listed into a few search engines.
Me, I don't have to worry about getting into search engines, do I? ;D Have a happy April!

(?) What's this word?

From dana gillen

Answered By Jonathan Markevich, Chris Gianakopoulos, Breen Mullins, Huibert Alblas, Heather Stern

(?) Can you tell me what PCMICIA stands for? Thanks!

(!) [Jonathan] Sure. "People Can't Memorize Computer Industry Acronyms". (You have an extra "I" in there)
Seriously, I believe it's "Personal Computer Memory Card Inter...   ..." Uh, I forget the "A", thus proving the previous statement.
However, they've since ditched the obscure acronym and now call it "PC Card", since "Memory" was very rarely what it was about.
(!) [Chris G.] The "A", I believe, stands for association.
(!) [Breen] PCMCIA = Personal Computer Memory Card International Association
A good resource for ATDA (All Those Dratted Acronyms) is the Babel File:
http://www.geocities.com/ikind_babel/babel/babel.html
HTH.
(!) [Huibert] Hi, my first post to Tag (here it comes :)
In GNOME there is a little utility called Gdict (Foot->Utilities->Gdict). If you go to Settings->Preferences->Server->Database you can select V.E.R.A. (Virtual Entity of Relevant Acronyms). It's great for looking up acronyms. (Except that the 'wrong' or funny ones are not listed; is there a place where those are listed?)
Hope I could help
(!) [Heather] Yes, funny ones that have been gathered into hacker lore can be found in The Jargon File:
http://www.tuxedo.org/jargon
...although, oddly enough, this one isn't in there. We'll have to fix that! :D

(?) Unable to Install Linux

From N P

Answered By Ben Okopnik

see attached equipment list

(?) I am trying to install RH 6.2 on the WD 12.3GB drive. However, it hangs during the installation (after the partitions are formatted and progress dialogs starts).

(!) [Ben] If you're using the graphical install, I suggest that you do not. All "freezing" problems that I've had with RedHat installations happened with GUI-based installations; all of them were resolved by going to the text-based one.

(?) Well, not quite true. If I don't select any packages (no X, compilers, multimedia, etc.) to install, it installs fine.

(!) [Ben] The program that installs RedHat is a huge, complicated thing that goes into weird contortions once in a while. If you simply cannot manage to install a full system by using it, install the basic system and whatever packages are necessary to dial up and do FTP (those should actually be a part of the basic system, but I'm not certain), and download a copy of 'rpmfind' <http://www.rpmfind.net/linux/rpmfind/rpmfind.html>. This program will connect to an "RPM server" and download whatever packages you specify, automatically resolving dependencies in the process. It's a not-quite-as-powerful knockoff of Debian's 'apt' tool, but is actually reasonably mature and useful.
Another option is to try installing another distro; I'm a real Debian zealot, myself. One of the many reasons that I really like it is that something like the above procedure is already one of the standard installation options: the base system install takes 5-10 minutes, you tell 'apt' which of the many available servers you want to use, and walk away. 'apt' can use FTP, HTTP, local CDs, or packages right off the HD - and you can mix-and-match sources however you like. Dependency problems? What are those? <grin>

(?) When the installation hangs there is no response to any keypress and it doesn't hang at the same part of the installation i.e. at the beginning (just starting), middle, or end (seconds to go).

(!) [Ben] Hm. Have you run a good memory test on the machine? There are plenty of tools available, but my favorites are the old DOS "burn-in" tool and Linux's "memtest86" (interestingly enough, "memtest86" doesn't require Linux: it is a bootable image that can be run from a floppy!) "memtest86" is a part of the "hwtools" package, at least under Debian. Run either one of them for a minimum of 24 hours.
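For reference, getting the memtest86 boot image onto a floppy is a one-liner with dd. This is a hedged sketch: the image filename below is an assumption, since the image ships under various names per distribution, so check the README of the package you downloaded.

```shell
# Write the raw memtest86 boot image to the first floppy drive.
# WARNING: this overwrites everything on the floppy.
# 'memtest86.bin' is an assumed filename - substitute your actual image.
dd if=memtest86.bin of=/dev/fd0 bs=512 conv=sync
# Then reboot with the floppy in the drive and let the test run 24 hours.
```

The conv=sync option pads the final block out to a full 512 bytes, so the last sector written to the floppy is complete.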

(?) DNS and telnet

From crabe

Answered By Mike Orr

Hi, How do you get telnet working on your own machine as referred to in the DNS HOWTO, i.e. telnetting at 127.0.0.1? I got telnet working to reach my ISP but never got around telnetting 127.0.0.1. So I gave up DNS. I have looked around all the HOWTOs available, and perhaps it's too simple for mentioning. I am running LinuxPPC2000. Thanks for any answer.

(!) [Mike] Are you trying to do a standard telnet ("telnet 127.0.0.1") or telnet to another port ("telnet 127.0.0.1 53" would be a DNS query)? Here are a few possibilities:
  1. Nobody is listening on the telnet port. If so, you'll get an immediate "connection refused" error. Telnetd is normally started from inetd. Uncomment the telnet line in /etc/inetd.conf and "killall -HUP inetd".
  2. Your loopback device is not configured. What happens when you run "ping 127.0.0.1"? If you get no response, run "ifconfig". There should be a stanza for device "lo". If not, run "ifconfig lo 127.0.0.1" and/or "ifconfig lo up". (If you're still running kernel 2.0.x, follow that with "route add -net 127.0.0.0".) Then look at your network startup scripts to see why it isn't being activated by default.
  3. Inetd runs telnet through a tcpd wrapper for security, and you're failing the tcpd check. This would cause the connection to do nothing (at least nothing visible) and then disconnect after a couple seconds. See "man tcpd" and "man 5 hosts_access".
  4. You are telnetting to port 53 and your nameserver is not running. If so, you'd get a "connection refused" error. If you installed named (bind), find out why it isn't running.
The TAG security hawks will send a follow-up if I don't also mention that telnet is a security risk bla bla bla because it doesn't encrypt your password or your data. Think twice before running telnetd, and think a third time before allowing tcpd to allow telnet connections from outside your local network.
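The checks above can be run in sequence as a quick, read-only diagnostic. This is a sketch assuming a stock inetd setup; each line maps to one of the failure modes in the list, and nothing here changes any configuration.

```shell
# 1. Is the loopback interface up at all?
ping -c 1 127.0.0.1 > /dev/null 2>&1 || echo "loopback down - try: ifconfig lo 127.0.0.1"
# 2. Is telnet enabled in inetd's config? (commented-out lines start with '#')
grep -q '^telnet' /etc/inetd.conf 2>/dev/null || echo "telnet not enabled in /etc/inetd.conf"
# 3. Is anything actually listening on the telnet port (23)?
netstat -ln 2>/dev/null | grep -q ':23 ' || echo "nothing listening on port 23"
```

If check 2 is the one that fails, uncomment the telnet line in /etc/inetd.conf by hand and run "killall -HUP inetd" as described above.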

(?) Help on LILO stopping at LI

From Alessio Frenquelli

Answered By Heather Stern

Hello,
I start thanking you for any help ... I am stuck at this stage, I am not a GURU on LINUX and I cannot overcome the problem.

Therefore I cannot really point to what has been changed or went wrong.

On the Internet I found many, many error reports pointing to LILO not being able to load from a disk area above the 1024th cylinder.

(!) [Heather] Yes, it used to be LILO's biggest bug, though not its loudest (that one is people using it wrong and then wailing what are they going to do now that their MBR is mangled).
But ever since the new version those are old messages. The normal solution until it came out was to create a tiny /boot near the beginning of the free space - even most dual booters could manage to slip a 20 Mb partition below the boundary. This works because only the kernel and boot map need to be below the line; once the kernel is loaded you are no longer working with real mode BIOS issues at all - you are fully in protected mode and can access everything the kernel is built for.

In my case LILO has always worked so far, and I certainly did not change the disk size or do any repartitioning, under Windows NT or under Linux.

Under Windows, I have run Program => PartitionMagic =>PartitionInfo and I am attaching the output of the command to this email in case you need to see in details my machine's partitioning.

(!) [Heather] So you normally use PartitionMagic for your dual boot menu, I'm guessing. If so, that is what is presently in your MBR, and it would be overwritten (***warning warning danger will robinson*** or at least look real carefully that you've set up stanzas for NT also first!) if you set boot = /dev/hda

ENVIRONMENT

Dual bootable Laptop, Toshiba Tecra 8100; one partition is Windows NT workstation, the other is Linux RedHat 6.1.

PROBLEM

Linux (RedHat 6.1) LILO no longer boots properly. It just stops at the word "LI".

SOME TROUBLESHOOTING INFO

Fortunately I have the boot diskette, and booting from it, I can successfully get to Linux.

When running /sbin/lilo I got these messages:

[root@afrenquelli /etc]# lilo
Warning: device 0x0305 exceeds 1024 cylinder limit
Warning: device 0x0305 exceeds 1024 cylinder limit
Warning: device 0x0305 exceeds 1024 cylinder limit
Warning: device 0x0305 exceeds 1024 cylinder limit
Added linux *
[root@afrenquelli /etc]#
(!) [Heather] You used to get this before, or you now have a bigger disk than you used to?
By the way, Redhat 6.1 is a bit old, and lilo itself was updated last year so that 1024 cylinder issues are not a problem for it. (You'll also want to keep up to date on RH security updates, not quite as drastic as upgrading the system entirely.)
With the newer version, you can add the keyword

lba32

at the top of your /etc/lilo.conf, and it will use a different method to know where things are on the disk.
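Concretely, the top of the file would then start out something like this sketch (lba32 is a real lilo.conf keyword; the other values mirror Alessio's file, quoted further down):

```
# top of /etc/lilo.conf -- lba32 replaces any old "linear" line
lba32
boot=/dev/hda7
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
```

Remember to re-run /sbin/lilo after any edit; the config file is only read at install time, not at boot time.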
Being a boot loader, it's critical for lilo to know precisely where the kernel resides on your drive. Moving your kernel file (even if you then moved it back) or your system maps is a good reason to run /sbin/lilo.

(?) Some machine's characteristics:

[root@afrenquelli /tmp]# df -k
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda7              1510032   1201268    232056  84% 	/
/dev/hda5                23302      2648     19451  12% 	/boot
(!) [Heather] No relation. Lilo puts its bits into the Master Boot Record, which is not shown here. If ordered to, it could also use the superblock, but that is also not shown here, as it's reserved space for the filesystem driver.
(?)[root@afrenquelli /etc]# lilo
Warning: device 0x0305 exceeds 1024 cylinder limit.
   Use of the 'lba32' option may help on newer (EDD BIOS) systems.
Fatal: sector 19926490 too large for linear mode (try 'lba32' instead)

------------------------------------- file /etc/lilo.conf contains

boot=/dev/hda7
(!) [Heather] This says to put it in the superblock of partition 7 ... your /
Most people would have it in the MBR ... /dev/hda with no number. Do you have an NT boot menu pointing you into Linux? I also notice that you don't have a chain loader stanza below, to offer you your NT boot setup.
(?)
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
vga=791
(!) [Heather] a nice framebuffer text mode :)
(?)
linear
(!) [Heather] "this is a big disk"
(?)
default=linux
image=/boot/vmlinuz-2.2.12-20
        label=linux
        initrd=/boot/initrd-2.2.12-20.img
        read-only
        root=/dev/hda7
(!) [Heather] The stock redhat kernel, I see.

(?) I did not "fiddle" with Linux at all before this error appeared !

(!) [Heather] What do you normally use Linux for?

(?) ATTEMPTS TO SOLVE IT ===================

Thinking that the latest version of LILO "could" have fixed this problem, I have downloaded LILO 21.6.1-1 from http://rpmfind.net/linux/RPM/contrib/libc6/i386/lilo-21.6.1-1.i386.html

I have then upgraded my LILO with: "rpm -Uhv <filename>.rpm"

(!) [Heather] Good deal! Yay!
You should also keep up to date on RedHat security updates for RH 6.1. (Not directly related to this, just a good idea)
Any other recent installs or upgrades?

(?) The upgrade completed fine, and then when I try to run /sbin/lilo I got:

[root@afrenquelli /etc]# lilo
Warning: device 0x0305 exceeds 1024 cylinder limit.
   Use of the 'lba32' option may help on newer (EDD BIOS) systems.
Fatal: sector 19926490 too large for linear mode (try 'lba32' instead)
(!) [Heather] Even better, I'm glad they provide useful error messages ... that tell you what to do about an error.

(?) So , I changed in /etc/lilo.conf the value "linear" with "lba32", and then /sbin/lilo runs fine with :

[root@afrenquelli /etc]# lilo -v
LILO version 21.6-1, Copyright (C) 1992-1998 Werner Almesberger
Linux Real Mode Interface library Copyright (C) 1998 Josh Vanderhoof
Development beyond version 21 Copyright (C) 1999-2000 John Coffman
Released 16-Dec-2000 and compiled at 17:04:30 on Jan  9 2001.

Reading boot sector from /dev/hda7
Merging with /boot/boot.b
Boot image: /boot/vmlinuz-2.2.12-20
Mapping RAM disk /boot/initrd-2.2.12-20.img
Added linux *
/boot/boot.0307 exists - no backup copy made.
Writing boot sector.

At this stage I "REALLY" hoped that the problem went away, but I still get only "LI" at boot time, I can only use the boot diskette to get into Linux.

(!) [Heather] That's weird :(
Maybe you need to take the linear or lba32 mark out ??

(?) WHAT'S NEXT ?

If I could avoid rebuilding the Linux partition it would be GREATLY appreciated, since I am not a Linux expert and would need some guidance. Also, lots of other software is installed and I would like to avoid reinstalling the whole lot!

(!) [Heather] I'd like to note at this time that you should make sure your backups of your personal data on both OS setups are current, in good working condition... and not stored on the same disk. Tapes or a CD-R or a stack of ZIP cartridges, maybe.

(?) Could I just de-install LILO and re-install LILO ?

(!) [Heather] You should be able to. Run lilo -u to put the backup bits from /boot/boot.0307 back into the superblock of /dev/hda7 (where it hopefully came from). If it whines about timestamps, lilo -U insists.
Then, you should be able to run lilo again to install it as a fresher instance.

(?) Or should I add something into the /etc/lilo.conf and try to run "lilo" again ?

Could I somehow just rebuild the "booting" portion of the Linux, and if so, could you please provide detailed instructions on how to do it ?

(!) [Heather] lilo actually has excellent documentation that comes with it... much finer than many application packages, in fact. If you run 'locate lilo' a bunch of things will scroll by, and several will be from the doc directory tree somewhere on your disk. I'm guessing /usr/doc/lilo- (some version number) but I'm not near a RedHat system right now to look. Anyway, on my SuSE system it's in /usr/doc/packages/lilo ... I have some dvi files (readable or printable by LaTeX tools) and some compressed postscript files (but ghostview ... the command gv ... is glad to show these to me).
The README, however, is the right place to start, because unlike most readme files around here (which can be summarized "so this is the foo program, I created it because bla bla. It's under the GPL/artistic/whatever license, see COPYING. If there are any bugs (hope not) get in contact with me at ...") it has some serious data in it. Consider it your quickstart guide to a working LILO.
You've already done a number of the obvious things, so let us know if the uninstall/reinstall trick works, and if that readme isn't helpful to you, we may be able to translate it into plainer English.

(?) Kind regards,
alessio

(!) [Heather] Our best hopes for your system, Alessio. We'd really like to hear back what fixed it, if you manage to solve it.
(!) [Ben] Ah - an opportunity to shill for one of my scripts! :)
Here's where my "doc" script would come in really handy: all you'd have to do is type something like "doc lil", and it would give you a numbered list of all the subdirectories in your "doc" directory that start with 'lil'. Typing that number enters the directory and shows you a numbered file list; typing one of those numbers displays the file, no matter what its format is. When you exit the viewer, it shows you the list again, and gives you a chance to "fish around" in subdirectories and other files.
I know I sent it into LG as a 2-cent tip quite a while ago, but I believe I've made a few improvements since then, so here it is:

See attached doc.bash.txt


(!) Heather,
thank you for your complete and prompt reply.

I will provide here some answers to the queries that you had. At this stage it seems that the problem is caused by one of our products, which I have recently installed under Linux. After reporting the LI problem to our support team, I received a reply stating that this product at times seems to affect LILO, and to cause the problem that I have described to you.

I am still awaiting word from our support team on whether they know how to fix the LILO problem.

And now to your questions:

(!) [Heather] Any recent changes? The next L should represent finding the second stage loader so the kernel can get going. These could be disk-ish things like you caught a virus and successfully cleaned it out, had a hard crash and needed to reboot, etc

(?) Yes, I have installed a new Micromuse package under Linux. After the installation I rebooted a few times without problem, but then one day, the LI hang started to happen.

 Warning: device 0x0305 exceeds 1024 cylinder limit
(!) [Heather] You used to get this before, or you now have a bigger disk than you used to?

(?) No changes to disk size or disk partitioning under Linux or under NT. I never ran /sbin/lilo before, so I cannot tell you if this warning always appeared. But LILO always worked before.

I did not "fiddle" with Linux at all before this error appeared !

(!) [Heather] WHat do you normally use Linux for?

(?) I use Linux to run Micromuse products, and as I said, the problem started to happen after I installed one of our packages. Other people have installed it on their laptops, but they did not report the error. Support told me that at times this product is known to cause some problems with LILO, but they are not too sure.

(!) [Heather] That's weird :(
Maybe you need to take the linear or lba32 mark *out* ??

(?) You mean, remove linear or lba32 from lilo.conf and try to run lilo again ? I will try this.

(!) [Heather] Then, you should be able to run lilo again to install it as a fresher instance.

(?) So I should just run :

  1. lilo -u or lilo -U
  2. Then how do I re-install LILO ? Just by running /sbin/lilo, or do I need to download the LILO package from somewhere and then install it with "rpm -Uhv .rpm" or something similar ?

Thank you for your help once again, I will keep you posted !
ciao, alessio


(!) Heather,

here is how I got this problem fixed with the help of the support personnel in our company

Thank you once again for your support !!!

Support advised me that the following recovery procedure for LILO problems is to be used for laptops that have a dual boot setup, NT & Linux, when the menu received at boot time shows OS LOADER and then presents 2 choices, Windows NT or Linux.

Steps taken:

  1. Boot Linux from bootdisk
  2. Run /sbin/lilo -v (make sure no errors are displayed)
  3. Insert a new diskette and mount it:

    mount -t msdos /dev/fd0 /mnt/floppy
  4. df -k gives me 2 filesystems:

    /dev/hda7 Mounted /
    /dev/hda5 Mounted /boot
  5. run command :

    dd if=/dev/hda7 of=/tmp/bootsect.lnx bs=512 count=1
  6. cp /tmp/bootsect.lnx /mnt/floppy
  7. shutdown and reboot under NT
  8. copy c:\bootsect.lnx c:\bootsect.lnx_old
  9. copy a:\bootsect.lnx c:\bootsect.lnx
  10. reboot under Linux

PROBLEM solved, LILO loads correctly.
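Step 5's dd is worth a closer look: it just copies one 512-byte sector. Here is a runnable sketch of the same operation on a scratch file - on the real machine if= would be /dev/hda7, and the file names below are made up for the demo:

```shell
# fake a tiny "disk", stamp its first sector, then extract that sector
dd if=/dev/zero of=disk.img bs=512 count=4 2>/dev/null    # 2 KiB scratch "disk"
printf 'LILO' | dd of=disk.img conv=notrunc 2>/dev/null   # mark sector 0
dd if=disk.img of=bootsect.lnx bs=512 count=1 2>/dev/null # the step-5 copy
```

NT's OS Loader then chains to that 512-byte bootsect.lnx exactly as it would to a partition boot sector, which is why the copy must be refreshed whenever /sbin/lilo rewrites the partition.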

Ciao, alessio


(?) How can you do a recursive search to find broken symbolic links?

From bandido

Answered By Ben Okopnik, Faber Fedor, Mike Orr

(?) I found the odd broken link after a few upgrades, and was wondering how I can hunt down any other such beasties. 'ls' doesn't have any suitable way to delimit, and poking about in man pages for find etc. made me quite nauseous.

(!) [Faber] Perhaps you should take some Dramamine. :-) The man pages are your friend. If you do a "man find" and then type "/link" (that will do a search on the word link) you'll find all kinds of references to the word "link" (the word "link" will be highlighted). Scroll down a couple of pages and you'll find the "type" option.
So, to find all the links on your system, you would type

find / -type l
Simple, no? :-)
(!) [Ben] No. The querent was asking how to find broken links, not all links.
What's needed here is the "symlinks" program, written by Mark Lord. It will find and classify all the links, hard and soft, in the filesystem. If you want to see all the dangling (i.e., broken) links on your system, just type

symlinks -r / | grep ^dangling # Recursive search starting from /
If you want to delete all the broken ones, just enter

symlinks -dr / # Recurse and delete broken links starting from /
For me personally, this wouldn't work too well. I use dangling links as placeholders; as an example, I've disabled NFS during the boot procedure by "breaking" the symlink in "/etc/rc2.d":

S19nfs-common -> ../init.d/nfs-common # Original link

S19nfs-common -> ../init.d/nfs-commonXXX # Dangling!
If I should need to restore NFS, a 5-second fix will do it, without having to figure out what directory the link should go into, where in the process it should load (as determined by the number after the 'S'), or where it should point.
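The placeholder trick can be sketched end to end; this runs in a scratch tree so nothing under /etc is touched (the nfs-common names just mirror the example above):

```shell
# "break" an rc symlink to disable a service, then restore it
mkdir -p tree/init.d tree/rc2.d
touch tree/init.d/nfs-common
ln -s ../init.d/nfs-commonXXX tree/rc2.d/S19nfs-common  # dangling: disabled
ln -sf ../init.d/nfs-common tree/rc2.d/S19nfs-common    # -f swaps it back in place
```

The 5-second fix is just that last ln -sf: the directory, the S19 ordering, and the target are all still encoded in the dangling link itself.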

(?) Thank you Ben, and others, a google search found "symlinks", although it only appears to be available for Debian.

(!) [Ben] You could use the "alien" utility to convert it, or simply go to Debian's page for "symlinks" -
<http://packages.debian.org/stable/utils/symlinks.html>
They always provide a link to the tarball from which the package was made, and you can compile it yourself. <grin> I like Debian. A lot.
(!) [Mike] Or, if you don't have the symlinks program available:
( find / -type l | xargs file ) | grep 'broken symbolic link'
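If file(1) isn't around either, find can manage on its own: test -e follows a symlink, so it fails exactly when the target is missing. A small self-contained demonstration in a scratch directory:

```shell
# build one healthy and one dangling link, then list only the broken one
d=$(mktemp -d)
touch "$d/real"
ln -s "$d/real" "$d/ok"        # target exists
ln -s "$d/gone" "$d/dangling"  # target never existed
find "$d" -type l ! -exec test -e {} \; -print
```

Swap "$d" for / to sweep the whole filesystem, and pipe into xargs rm if you really do want them gone.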

(?) The symlinks prog worked a charm; indeed it cleaned up everything nicely, changing absolute to relative links too, lovely.

The real issue, is my rampant stupidity, since after downloading symlinks, lo and behold, I discovered it is part of Mandrake 7.2 which I use.

I had pissed about poring over man pages trying to find out how to delimit a search to find the buggers, only to discover my salvation was close at hand.

I have received several TAG replies, and I must say the 1st was within 45 minutes. Astounding :)

(!) [Heather] I'll say it's astounding. Some people don't get answers for weeks... if at all...

(?) Keep it up guys.

-- Merp!


(?) BIOS passwords - Bane of my existence

From Unidentified Querent

Answered By Ben Okopnik, Heather Stern

Can I send the Answer Gang a question and ask that I not be identified? PLEASE??? Reason: I feel stoopid enough already. Hey, you may decide that it isn't even a good idea to print this one. I doubt I would...

(!) [Ben] You really have no reason to feel stupid. Not knowing something does not equate to being stupid; as I tell the students in my classes, "educated is what you're supposed to be when you come out, not when you come in." However, I don't think that there will be any problem with honoring your request: Heather, the TAG's Answer Gal, scrubs off the e-mail addresses anyway, and I've already removed your name.
(!) [Heather] Yes. I can strip anybody who wants down to anonymous, and I already make a sincere effort to scrub company references, etc. from most things. Sometimes it matters (like when someone is a spokesperson for the company of a product we're talking about) but usually it is cruft and gets cut.
As Editor Gal I can make sure this thread is scrubbed thoroughly of your identity, and will.

(?) It's really TWO questions but the second question is not necessary if you have an answer to the first (which I doubt).

(1) I read your "LILO:Password Protected Entries" article in the new March LinuxGazette. Though I do not have a LILO question, I'd like to ask you to follow up on something else you touched on in that article.

One of my toys is a CTX EzBook 800 laptop which is currently running SuSE 7. A while back, I thought it would be a good idea to block access to "Lorraine's" BIOS settings. I set the BIOS password so that access to the BIOS is blocked but booting is not. Good thing. I soon forgot the password.

(!) [Heather] Uh oh. This is below the level where Linux can probably help, but read on.

(?) This isn't a HUGE problem since I don't have to access the BIOS very often but booting from a CDROM is now impossible (without using a boot floppy) and setting or correcting local time is a real pain in the rump (see question 2).

(!) [Heather] have you tried setting Linux' date and then:
hwclock --systohc

(?) I know BIOS backdoors exist but I've been unable to find one for mine.

(!) [Heather] /dev/nvram maaaaybe. Unfortunately it's laced with some righteous warnings and most people use it by figuring out what to do with it when they have normal BIOS access.

(?) Lorraine's got a PhoenixBIOS 4.0 Release 6.0.67A dated 1985 - 1997. In the year or so since I got stoopid, I've scoured the Internet for info on what the Phoenix backdoor might be - I found nothing. I even contacted the manufacturer, CTX, to see if they would help. All they would suggest was popping open the laptop and removing the BIOS battery, something I'm not sure I'd do even if I knew how to (yeah, I know I'm a wimp).

(!) [Heather] Opening the laptop may be tricky. The usual rule is There Are Lots Of Tiny Screws To Get Lost. Taking notes, and not making sudden moves while it's half open (so you can see where the plastic pieces are plugged in before carefully working them loose), are both good ideas.
But the BIOS is usually a watch battery and about as easy to deal with as a watch once you have it.
You may want to get printouts and take notes during bootup of things that are BIOS options as far as you know them. dmesg may help some.

(?) So... The questions remain: Do you know how to foil this sucker or, failing that, can you

(2) Tell me how to reset the BIOS time from within SuSE 7? That'd be a piece of cake with RedHat's linuxconf but I've yet to find anything in SuSE that would do the trick. Don't even ask about yast and yast2... Change time zone, yeah. Change time, no way.

(!) [Heather] Ah yes, this would be the hwclock command I gave above. You have to be root to use it.
As for linuxconf, err, I haven't had good luck with it myself. YaST (yes, that's really how the command is spelled) is the admin tool under SuSE, but as you can see, it's really more about installing stuff, not so much for sysadmin work.
(!) [Ben] Take a look at the 'cmostool' utility <http://www.ibiblio.org/pub/Linux/hardware/?M=A>. It allows you to back up, modify, hex dump, etc. the CMOS - as well as deleting the whole thing (which wipes out the password).
!!! WARNING WARNING WARNING !!!
Do not do this if you don't know what you're doing! Wiping your CMOS will make your system unbootable. You must know at least the CHS (cylinder-head-sector) values for your hard drive, and either know or be able to figure out the other necessary settings. If you dump your CMOS and get stuck, you are on your own!
Now that I've scared you into twitching fits and heebie-jeebies...
Most BIOSs today are auto-configuring, and will either auto-detect or give you the option of auto-detecting your HD; Phoenix BIOS certainly does that (it's been my favorite for many years now). For myself, if I'm going to do that sort of thing - and I've worked on many, many machines where the owner had set a BIOS password and forgot it - I'll boot DOS, save a copy of the settings to a bootable floppy via 'savecmos', and only then blow away the password via 'cmosedit'. That way, if things go truly awry, I can at least get back to where I was and try something else. The 'savecmos' utility (including 'cmosedit') is available all over the Net, e.g. <http://members.tripod.co.uk/paulc/cmosutil.zip>.

(?) P.S.: I bought this laptop new from Sears (don't laugh) and have the receipt and everything. Honest!

(!) [Ben] I will be certain to stop by and check up on you. Have them ready, and be afraid. Be very afraid. :)
(!) [Heather] Cool. Makes it lots easier to insure and all. It's entirely a side note, but http://www.mobilix.org has a nice list of laptop resources.

(?) Thanks!!!
Signed: Stoopid


(!) Making the Connection

By Anthony E. Greene

Somewhere in the shuffle the original querent's message has been lost, but basically, they asked about connecting their hospital together, so that the doctors could communicate with ER and ICU, staff could access suitable records or charts, etc. The doctors are not dumb people, but they already have a specialty and a job to do, so it has to be a pretty clean setup.

You could set up a PPP server and use the modems to make dialup PPP connections. This would allow you to use graphical network applications such as browsers, FTP clients, and network file managers such as GNOME's GMC.

Distributions and Packages

Without knowing more about what resources you have available, I cannot make specific recommendations. Red Hat, Mandrake, Slackware, Debian, SuSE, and Caldera all come with the tools you'll need to set up a network. I have not used Corel but I've read that they left some server and development packages out. That may be fine for home desktops, but in a business environment I'd want a distribution that includes everything I might need and lets me choose what to leave out or disable. You will need some server packages to implement a solution, and you will want development packages available in case you need some tools that are not available in a package.

Data Entry

First you need to figure out what applications will be used for data entry. Eventually, you may find you need a database application, but it sounds like what you need right now is something that generates documents that can be shared. If the results are to be typed out as free text, a text editor is probably the best way to go. The text editors that ship with GNOME (gedit) and KDE (kedit) are both adequate, but something like NEdit has fewer bugs and more power. If you need to use templates for data entry, you could either create some read-only files as templates or create templates in StarOffice for use with its word processor.
For something with a little more familiarity to GUI users, AbiWord can edit plain text, RTF, and simple DOC files. It has a toolbar that any Word user could use with no problem and is fairly lightweight. AbiWord is part of GNOME Office and ships with the Ximian (Helixcode) desktop.
There are some Open Source medical applications available. Try searching for them at Freshmeat <http://www.freshmeat.net/>, SourceForge <http://www.sourceforge.net/>, and Google <http://www.google.com/>.

Desktop Applications

If you really need an integrated solution for Linux desktops at a minimal cost, StarOffice is a good choice. The latest version (5.2) is still a serious memory and resource hog and takes time to start up. But once it's running, its speed is reasonable, considering its large feature set.
I haven't used Applixware, but it is supposed to be very usable and programmable. The latter may prove useful to you if you plan to use it for data entry. Applixware is not free, but is a lot less expensive than MS Office.
For intranet browsing and email, I still recommend Netscape Communicator 4.7x. It works fairly well, is stable on non-Java pages, and supports LDAP and HTML mail. These last two features are very useful in an organizational mail client. Netscape 6.x does not support LDAP, and StarMail's LDAP interface is too difficult to be useful. An LDAP server is not too hard to set up for small organizations and is great for maintaining an organizational address book.

Sharing/Publishing

After the data is entered, you will need to make it sharable. I suggest each department have a directory that only they can write to and any authorized user can read. Setting up these groups and permissions is not too complicated, but is more than I want to cover here.
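The core of that per-department setup is just a group, a directory, and one chmod. A sketch using a scratch path and a made-up "radiology" group name; the group-creation lines need root and so are left commented out:

```shell
# one department share: group members write, everyone else only reads
mkdir -p shares/radiology
chmod 2775 shares/radiology   # rwxrwsr-x; setgid keeps new files group-owned
# groupadd radiology                (as root, once)
# chgrp radiology shares/radiology  (as root)
```

The setgid bit on the directory is what saves you from chasing ownership later: files created inside inherit the department group automatically.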
The key thing about sharing is deciding what protocols you will use to share. Client applications for FTP and HTTP are easy to use. Both servers are easy to install. But the permissions scheme for HTTP is separate from the system user and group settings. That makes it complicated to set up if you have multiple groups of users that need different permissions. So I don't recommend using Apache and HTTP to share the documents.
You can use FTP, but the WuFTPd server that has shipped with many distributions is an almost constant source of security problems. Just make sure you disable anonymous logins if you choose to use FTP. Web browsers are great FTP clients because they can launch external applications to view documents. The only real problem with FTP is that passwords are sent over the network unencrypted. On a small, closed network this should not be a problem.
This is probably more than you expected, but it's just enough to get you started. Running a network will mean learning a lot at first, but it should run well after it's set up.

(!) Setup of Microsoft Outlook Express 5 for Sending of Clear Text

Answered By Chris Gianakopoulos

[Heather] We get so many people who send us perfectly good questions, in HTML, which drives some of our mailers crazy. It's not surprising that someone with a crippled Linux box would reach for a nearby Windows system to send the mail. So, here's some help for you. Utterly self serving, to help us get plaintext :)

(!) [Chris] Hey Heather,
I glanced at the March 2001 Linux Gazette and noticed your (subtle) request for the steps needed to set up my Outlook Express mailer to send clear text. I read in a textbook (circuits) that a promise made is a debt unpaid. I will recoin the phrase as "A request made is a response unpaid". Therefore, I will post you a response which attempts to provide a coherent set of steps to achieve our goal.
I actually executed these steps while typing the steps into a text file using vi for DOS. I use my Microsoft machine when I email late nights. I hope that this is coherent! Anybody can sanity check me, of course (that's what teams do -- review each other's work). Here are the steps.

Steps for Setting Up Microsoft Outlook Express 5 for Sending of Clear Text

  1. Start up Microsoft Outlook Express.
  2. From the "Tools" menu, select "Options".
  3. An Options dialog box will pop up.
  4. On the Options dialog box, select the "Send" tab.
  5. Under the "Mail Sending Format" section of the dialog box, select the "Plain Text" radio button.
  6. Press the "Plain Text Settings" button.
  7. A Plain Text Settings dialog box will appear.
  8. For message format, I select the "MIME" radio button.
  9. Check the "Indent the original text with" check box. This will cause any included original message to be indented and preceded with a ">" sign.
  10. Select "Automatically wrap text at" with 74 characters. (Ben Okopnik's suggestion to me.)
  11. Press the OK button of the Plain Text Settings dialog box.
  12. I make sure that the "Reply to messages using the format in which they were sent" check box is unchecked.
  13. Press the OK button of the Options dialog box.
You're all set!
-------------------- End of Instructions ----------------------
The line lengths of the steps look short because I typed those steps into a text file, using vi, and I always keep lines less than 80 characters. I'm from the old days of using terminals (not ASR-33 teletypes although once I had a General Electric Terminet 300 TTY for a printer), so I avoid line wrap.
Thank you and Ben for the encouragement that you give. I'm still a cross between a soon-to-be Linux hacker and an embedded software hacker (they call me an engineer, but I think that is questionable).
You are a kick-a## team! Keep up the good work!
Chris G.

(?) icons

From Joseph Ibbitson

Answered By Thomas Adam

Hello Gang

I'm an old guy (81) trying to learn Linux with very little computer experience. Strangely, with so much help available online, I find it easier to learn Linux than Windows. However, one problem I have is just what the various icons indicate when left-clicking on the file tree. I see gears, folders, screens with and without locks, apparently sheets of paper, some with corners folded over, cubes of assorted colours, etc. etc. I am running Mandrake 7.2. My main problem is that I cannot find any instruction on how to navigate the file system when I don't know anything! I have yet to find a book that explains the very basics. Example: how do I find the proper way to install software? I have installed, I believe, SANE - I see it listed, but how do I arrange it so I can use it?

I hope you will excuse this rambling request. I really want to master Linux, but until I can get over the basics I am having trouble. Any help you can give me will be sincerely appreciated. Thank you.

-- Joseph

(!) [Thomas] Hi....
Judging from your description, I assume that you are using KDE. The icons that you see are supposed (although I admit, I have trouble with this) to help you understand what KFM (the K File Manager) is doing.
Cog wheels indicate that the program is executable; folders indicate just that - that they are folders; screens usually indicate that the program is a script of sorts. Try clicking on one once and opening it in a text editor such as "kwrite" or "kedit".
In really basic terms, the Linux file system has various components to it....
the root of the file system "/" holds folders such as:
etc
home
usr
root
mnt
"etc" holds most of the initialisation scripts that loads as linux is booting (i.e. the output from the kernel)
"home" is the folder which stores the users work that is on the system
"root" is the folder where all of "roots" work is stored. Root is the system admin of a linux computer and has read\write permissions on every file. In other words root controls everything.
"mnt" holds the symbolic links to other partitions on your local machine
"usr" is the folder which stores main executables, man pages, etc.
Using your file manager as before, replace what is already entered at the top with /usr/bin.
Here you'll find a lot of cog wheel icons. This is the main folder which stores most of your programs.
Since you are using Linux-Mandrake (as do I), installing software is often done by using RPMs (RedHat Package Manager packages). To install these, insert your CD and at the console, type:

cd /mnt/cdrom/Mandrake/RPMS
then type:

ls
and you'll see a huge long list. To install any RPM (regardless of the path/folder that it is stored in) type:

rpm -i nameofrpm-1.0-0mdk.i586.rpm
and that should install it (assuming there are no unresolved dependencies!)
I know this must seem very vague and confusing, but I believe I have started you off....

(?) Corrupt Tar Archive

From Mohamed Ezz

Answered By Ben Okopnik

I have 'ftp'ed an 8MB tar archive file and cannot untar it. I did not do a checksum after the ftp because I know ftp should do this on its own. When I run: $ tar xvf myfile.tar I get:

tar: This does not look like a tar archive
tar: Skipping to next header
tar: 447 garbage bytes ignored at end of archive
tar: Error exit delayed from previous errors

My problem is that I lost the files of which the archive is composed, so I can't regenerate it. To make things worse, the archive file on the source machine (from where I did the ftp) was deleted. So the local archive is my only hope of retrieving my files.

Any help is greatly appreciated.

Ezz

(!) [Ben] There might actually be some hope here! If an archive is tarred and gzipped, the above is exactly the error that will be returned when you try to untar it without un-gzipping. Try this:
tar xvzf myfile.tar
Note that the file really should have been called "myfile.tgz", if that's what it turns out to be.
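A quick way to check what you actually have, before guessing at tar flags, is the 'file' utility, which inspects the contents rather than trusting the extension. Here's a small sketch that builds a gzipped tarball with a misleading .tar name (the file names are invented for the demonstration):

```shell
# build a tiny gzipped archive, but give it a plain .tar name
echo "hello" > /tmp/demo.txt
tar czf /tmp/myfile.tar -C /tmp demo.txt

# 'file' looks at the magic bytes, not the name
file /tmp/myfile.tar     # reports gzip compressed data, so use tar xvzf
```

If 'file' had reported a plain "tar archive" instead, the original tar xvf would have worked as-is.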

(?) Hello Ben, That was it! Thank you so much. Excuse my ignorance about the extension. Ezz

(!) [Ben] No worries at all, Mohamed; glad I could help.
(!) [Mike] Actually, it makes us happy to learn a problem has been fixed. Thanks for letting us know.

(?) masquerade in sendmail is broken.

From Clark Ashton Smith

Answered By Ben Okopnik

(?) In issue 21 there was an article by the "The Answer Guy" which explained how to use the masquerade feature in a local sendmail configuration.

(!) [Ben] Issue 21 is from 1997. Considering how much Linux has changed since those days, and how closely "sendmail" is tied into all those changes, relying on information that old is not going to get you good results.
On the other hand, the information that is available for setting up sendmail is generally pretty poor, and not at all intended for the casual user; it's pretty nightmarish out there.

(?) It worked fine with Redhat 5.2 Linux, but I just tried it on Redhat 6.0 and the FEATURE(nodns) reports that it is a no-op and I should use the service.switch file to disable dns lookup. Well after 5 hours of reading sendmail faqs, newsgroups and tips I am no closer to making this work.

I have a simple network with a ppp connection to the internet. Many folks out there must have similar setups.

Could someone please show us how to get the masquerade feature working again?

(!) [Ben] Well, you could look at my article in issue 58, called "Configuring Sendmail in RedHat 6.2"; this might be a bit more up to date, and tells you how to do masquerading. Given the situation that you're in, though, you might want to try installing "masqmail" - all the features you need in your situation, made to work well with a masquerading setup, works with multiple ISPs, and it is much less complex than "sendmail".
If you set it up and it does what you need, you owe the Oracle an article on your experience. <grin> Just kidding. Hope this helps.
(!) [Heather] Actually, we really could use the article :) I've tried masqmail and it looked okay, but its minimal documentation seems to assume that you'll be using masqdialer to drive your dialup connection. I never quite got around to spending the time to make it deal with changing but non-dialed connections (such as laptops often encounter). A little Answer Gang message or even an article about your experience setting it up properly without that assumption would be really handy. Alternatively, now you also know about masqdialer, and it may make dialing into your ISPs easier.

(?) neighbour table overflow

From Berg Alexander

Answered By Heather Stern

hi,

i have the problem that i have a running terminal-server (booting over net via Root-NFS) system. now we want to add a second subnet to the server, and all should be okay in the config files. BOOTP is working, TFTP is working but the client is not able to mount the root-fs, with the error message "neighbour table overflow"... we also have changed the nfs-server, no luck...

bye

Alexander Berg

(!) [Heather] The message 'neighbor table overflow' is not about your NFS, it's at a lower layer than that.
It means that your ARP cache is overflowing because your machine can't tell who is on its own subnet - its neighbors. That usually means your localhost setup is broken (lots of applications use networking internal to your machine, which is always on its own subnet, so those packets should never even escape the computer) or, far less commonly, that the netmask for your own external address is wrong.
Sadly, tftp and network booting are things I'm not so familiar with, so perhaps one amongst the rest of The Answer Gang can help tell you where to correct your terminals' localhost setup.
Because this happened when you were adding a new subnet, you may find you need to set up machines on both subnets with ethernet aliases. When properly set up, running ifconfig should result in something like this:
eth0      Link encap:Ethernet  HWaddr so:me:he:xv:al:ue
          inet addr:192.168.129.15  Bcast:192.168.129.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5939693 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5971444 errors:0 dropped:0 overruns:0 carrier:0
          collisions:8308 txqueuelen:100
          Interrupt:10 Base address:0xff00

eth0:1    Link encap:Ethernet  HWaddr di:ft:he:xv:al:ue
          inet addr:192.168.64.2  Bcast:192.168.64.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:10 Base address:0xff00

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:320906 errors:0 dropped:0 overruns:0 frame:0
          TX packets:320906 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
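For reference, an interface alias like the eth0:1 above is created with ifconfig itself; a minimal sketch using the addresses from the sample output (adjust to your own subnets, and note it must be run as root):

```
# add a second address on the same physical card
ifconfig eth0:1 192.168.64.2 netmask 255.255.255.0 broadcast 192.168.64.255 up
```

You would also want to add the same command to your network init scripts so the alias survives a reboot.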
Best of luck!

Note: Alexander put antispam hooks in his address when mailing us, so he never saw his emailed response. We still hope this helps him and others facing the dreaded neighbour table overflow.


(?) VIDEO CARD

From Tracy

Answered By Heather Stern

DEAR ANSWER GUY:

Greetings.

Do you know who is making two-monitor cards for 98SE?

Thanks.

(!) [Jonathan] Oooh, this one has "Heather" written all over it. Let's stand back.
(!) [Heather] Nope. Our focus is on Linux, so we don't think much about Windows (except for sharing files with it)... It is possible to use multiple monitors with Linux, as long as you have such a card and it's supported by X version 4. Before we got that, folks had to buy a well-tuned commercial X server (think: video driver) called "Metro-X" (http://www.metrolink.com).
So, as a matter of hardware, you might not need one card that drives two monitors at a time; some video cards are okay with having two of the same kind in the computer, but then you need a video driver that will survive the experience. One Linux user described his troubles doing this at (http://www.tarball.net/docs/multihead_fb_howto.html).
The keywords you probably want to try in the search engines are "dual monitors" or "multi-headed" video cards.
Perhaps a visit to one of the Windows magazine sites would be more useful. With luck they might have had an article on the topic in the last few months, so you could get some reviewer comments about which cards are any good on your platform.

(?) ENTHUSIASTICALLY YOURS, tracy

(!) [Heather] Best of luck and feel free to give us a buzz if you have any linux questions.

(!) DEAR HEATHER,

Thanks. Matrox answered too and it appears that Matrox has a card (G450) with two outlets which allows two monitors to operate as one big one.

ENTHUSIASTICALLY YOURS,

tracy


(?) fat versus inodes

From narender

Answered By Heather Stern

dear sir ,

i want to know why viruses are so common in DOS and Windows while UNIX is immune to these?

(!) [Heather] In order to spread effectively, viruses have to gain system-level privileges and abuse them. In DOS and Windows, system-level privileges have no "natural" defenses - all requests for system services are made on behalf of the same user: you.
NT has slightly better natural defenses, but also gets some interesting ones.
The ability of viruses to spread seems to be enhanced by some other features which you would otherwise find handy, like the ability of several apps to share a single macro language.
This is why there are so many antivirus companies - even after they've gone and bought each other up a bunch. They're in the business of selling immune systems and the ability to spot that the machine is "ill" before the symptoms get obvious.

(?) is it all due to inodes concept in the unix ?

(!) [Heather] No. UNIX-family OSes all expect different applications to run in separate memory spaces; each is called a process. If a process (even one owned by the same user) tries to wander out of its allowed space, it is killed (that's called a segmentation violation, or segfault). In addition, normal users don't have full system privileges. Beyond that, we have a great many macro languages available, and few systems share enough of the same configuration that a virus can be sure of one feature or another being present. Having to make decisions makes such "invaders" large - and larger invaders are more easily spotted, or may set off other defenses. So while in theory it's not impossible for a Linux virus to exist, it's much harder.
The main case I know of was basically a research virus - it could only spread if the system's user also did a few things to improve his ability to access the system as root when working remotely. Very few people do that, or even want to.
We have much more to fear from crackers trying to generate these failures deliberately, than from viruses trying to invade our systems automatically.
However, it's worth noting that LILO is a master boot record - it looks different, but it's still an MBR, so any virus you catch in a dual-boot system that attacks the MBR will attack your LILO. That its code is "coming from Linux" won't save it. It does have a few defenses, but it's not very big. Many other bootloaders exist too, and if you're living in a virus-rich environment you might want to use one that specifically has some antivirus features.

if so, will you please tell me in more detail the responsible differences between FAT and inode tables?

needing yr help
regards
narender

(!) [Heather] Well, it's not the responsible thing, but it's a fair question.
FAT is a table at the beginning of the disk, which divides the disk up into "clusters" and marks how each cluster is used. (There's actually two tables, so that there is a safe copy in case of problems, but normally, they contain the exact same data.)
inodes contain a small amount of information (called metadata) about the things they point to, and the things they point to can be put anywhere on the disk, because part of the metadata says where that is. We have a different way of keeping track of what disk space is still free to allocate. For more about this, study about the "superblock" since we do have things that affect how many inodes we can use, and so on, as options when we format a disk under linux.
So it is simplest to say that the difference is that FAT directly represents the disk, while Linux's filesystem indirectly represents the disk.
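That inode metadata is easy to see from the shell; a small sketch (the file name is invented for the demonstration):

```shell
# create a throwaway file to inspect
touch /tmp/inode-demo.txt

ls -i /tmp/inode-demo.txt   # the first field is the file's inode number
stat /tmp/inode-demo.txt    # dumps the metadata the inode holds: size, owner, timestamps
```

Note how the directory entry only maps the name to that inode number; everything else about the file lives in the inode itself.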

(?) Installing Linux without cdrom

From Berry Vos

Answered By Ben Okopnik

Hello,

The other day I bought a second-hand computer. I already had a laptop-computer with SuSE linux installed, which is running perfectly. I bought the new pc to install linux to experiment a bit without messing up my primary computer in case I do something wrong. The problem is that the new pc has no cdrom. Also, neither the new pc, nor my laptop has a network card. I wonder if it is possible to connect the laptop with the pc with a null-modem cable and install linux off of the laptop on the pc. Is this possible? If so, how?

I hope you can help me.

(!) [Ben] The process is thoroughly described on the last page of the PPP-HOWTO under "Using PPP across a null modem (direct serial) connection". You will need an actual null modem cable ("LapLink cable"), detailed in the Hardware Book (see the "hwb" package), in "ca_Nullmodem9to9.html". Don't just buy a regular "9-to-9 serial cable" from Radio Shack; it won't work. A lot of what are called "null modem cables" won't either. Buy a LapLink, or build one; if you can handle a soldering iron, it takes about 15 minutes.
The one thing you want to be aware of is that, at best, you'll be pumping the data across at just over 11kB/s. To put that in perspective, copying the installation CD to your laptop (this is not necessary for the installation, but it'll give you an idea) will take approximately 16 hours. I've done a Debian installation by pumping across the base install files (24MB, about 40 minutes) and running them, then setting up 'apt' to FTP the individual files from the source machine via the serial link. I let it run overnight; by the time I woke up the next morning, it was all done.
Not that this is a big deal - it wasn't to me, anyway - but you want to be aware of the time scale involved. <grin> I do love my serial link. It lets me get out of buying a PCMCIA NIC for a laptop that's on its last legs anyway, and 99% of the files are small enough that speed isn't really an issue.
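Ben's 16-hour figure checks out with quick shell arithmetic, assuming a 650 MB CD image and the quoted 11 kB/s:

```shell
# 650 MB = 650*1024 kB; divide by 11 kB/s, then by 3600 seconds per hour
echo $(( 650 * 1024 / 11 / 3600 ))   # prints 16 (hours, rounded down)
```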

(!) Installing RedHat 7.0 and a driver for the Chipset Cirrus CL-GD5436

Answer By Wilf

Hello and Good'ay,
Installing RedHat 7.0 was nothing special though programmes seem to run a bit faster than under RedHat 6.0... but:
Although RH 7.0 does recognize the graphics chipset Cirrus CL-GD5436 (on a Compaq Deskpro 2000), it does not provide the correct driver as RH 6.0 does (after a long and despairing search I found the information that the driver was no longer included in "XF86_SVGA-3.3.6-33.rpm" shipped with RH 7.0). The mouse pointer under X (I use WindowMaker) was displayed as a barcode like those found on products; windows overlapped and were redrawn in a rather chaotic and random manner. Other window managers (KDE or GNOME) did not display it any better.
Fall back to RH 6.0 or cope with this particular problem? I did both, but used some (perhaps unnecessary) force to install RH 7.0 all the same.
I first installed RH 7.0 and configured graphics display with default values. After installing and BEFORE calling up a graphical display with "startx" I force-installed the driver from RH 6.0 with:
rpm -i XF86_SVGA-3.3.3.1-49.rpm --force
and - much to my delight, and would ya believe it, matey? - all possible modes of graphical display (640x480, 800x600 and 1024x768, all in 16-bit colour) now work as expected: very good indeed!
In the event that this procedure could have been done otherwise, I'd appreciate a comment.
Yours Linuxely,
Wilf
P.S. I take it that the section "2 cent tips" rather means "2 US ¢ tips" - or are you talking EURO ¢? Mind you, it's just a thought ... ;-)
(!) [Heather] Well, the usage comes from an idiom for "putting in our two cents worth" -- cheerfully offered advice, not always wanted, sometimes useful. So, if you have a local idiom that expresses the same concept but uses a different chunk of change, then you know the exchange rate ;P Our friends in the UK can surely use Tuppence Tips.
Forcing the old-series rpm into place in this fashion is just the right thing here. You should safely be able to apply the X parts from 6.2 ... XFree86-3.3.6-20.i386.rpm ... as an upgrade (-U) to the 3.3.3 you successfully installed, if you like.

(!) cd-writing mini-howto

Answer By Chris Coyle

I found the "CD-Writing with an ATAPI CDR Mini-HOWTO" (http://www.linuxgazette.com/issue57/stoddard.html) very helpful. Thank you.
Here are a couple of suggestions which other readers who are interested in the same subject may find useful.
  1. (very minor) I think it should be /etc/modules.conf not /etc/conf.modules
  2. I just discovered that the ide-scsi module in kernel 2.2.17 (from RH rpms I just DL'ed), either has a big problem, or else it is significantly incompatible with previous kernels.
Here's what happened:
I DL'ed and installed the RH rpms for kernel 2.2.17-14. These were recommended in a security advisory. I kept my previous kernel (2.2.16) installed just in case, adding a new section to /etc/lilo.conf by copying the previous one, mutatis mutandis. Then I ran lilo and rebooted. At first everything appeared to be OK with the new kernel, but then I tried to mount my cd-rom and it failed, giving the message
mount: wrong fs type, bad option, bad superblock
on /dev/cdrom or too many mounted filesystems
While I was searching for the cause of this I remembered that I had set up my cd-recorder to use ide-scsi.
My regular cd-rom reader is hdc and the cd-recorder is hdd. Following the directions in your mini-howto, I had inserted
append="hdd=ide-scsi"
and this I had copied faithfully into the new 2.2.17 section. When I removed it and rebooted, I found I could mount the cd-rom again. Then I put the line back in and rebooted.
This time I looked at what scsi devices were detected. Eureka! By looking at dmesg and also by using "cdrecord -scanbus" I discovered that the ide-scsi module had taken over both hdc and hdd, even though I requested only hdd. I asked for help on comp.os.linux.misc and within hours someone else confirmed the same thing, namely
"...if you have two devices on an IDE channel, and one of them is under ide-scsi emulation, it's better to treat both of them as if they were under ide-scsi emulation.
I don't know if this is due to an error or a design change, but the work-around was quite straightforward.
The only tricky bit was that I wanted to be able to boot 2.2.16 so I had to devise a way to make both kernels boot up in a state where they could use the same devices and configuration files. My solution is as follows:
  1. Change the lines in /etc/lilo.conf to
    append="hdc=ide-scsi hdd=ide-scsi"
    in both kernel sections.
  2. Move the /dev/cdrom link from hdc to scd0.
  3. Change the scsi configuration for the cd-recorder in /etc/cdrecord.conf to 0,1,0 (since it is now the second scsi host).
After all that I am finally back to the point where I can mount the cd-rom and use the cd-recorder, with either 2.2.16 or 2.2.17 kernel.
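Step 2 above, repointing the /dev/cdrom link, can be sketched like this; I'm using a throwaway directory under /tmp so it's safe to try (on the real system you would run the ln command in /dev, as root):

```shell
# demonstrate repointing a cdrom symlink at the SCSI-emulated device node
mkdir -p /tmp/dev-demo
ln -sf scd0 /tmp/dev-demo/cdrom

readlink /tmp/dev-demo/cdrom   # the link now points at scd0
```

The -f flag replaces the existing link in place, so anything that opens /dev/cdrom keeps working without reconfiguration.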

(?) Reading the logs

From Andrew

Answered By Heather Stern

Hello Mr Answer Guy,

While i'm here i'm going to get my 2 cents' worth & throw a few questions at you (hehe, that's funny since you offer your knowledge for nix). I'll get in now before you decide to go commercial 8^)..

(!) [Heather] Some of us are consultants, for those who enjoy working directly with a Linux guru, or want a guaranteed answer of some sort - TAG gets a lot more mail than anybody can really answer, and complicated or non-Linux things often get ignored.

(?) Running Redhat 6.1 1./ 1st thing is as soon as i decide to start logging Kernel logs to /var/log/kernel via syslog.conf i get the following

Mar 28 14:20:12 echelon kernel: klogd 1.3-3, log source = /proc/kmsg started.
Mar 28 14:20:12 echelon kernel: Inspecting /boot/System.map-2.2.12-20
Mar 28 14:20:12 echelon kernel: Loaded 6865 symbols from /boot/System.map-2.2.12-20.
Mar 28 14:20:12 echelon kernel: Symbols match kernel version 2.2.12.
Mar 28 14:20:12 echelon kernel: Loaded 168 symbols from 12 modules.
(!) [Heather] That part's normal...
(?)
Mar 28 14:20:12 echelon kernel: VFS: Disk change detected on device ide1(22,64)
Mar 28 14:20:44 echelon last message repeated 17 times
Mar 28 14:21:46 echelon last message repeated 31 times
Mar 28 14:22:47 echelon last message repeated 30 times
Mar 28 14:23:49 echelon last message repeated 31 times
Mar 28 14:24:51 echelon last message repeated 31 times
Mar 28 14:25:52 echelon last message repeated 30 times

(What does this mean???)

(!) [Heather] Uh, it means it's gone crazy thinking there's a disk change when there isn't one. ide1 is your second IDE chain, so maybe your CD-ROM, or an LS-120 bay.
Removable media bays have either optical or mechanical sensors to detect that new media has arrived ... enough dust particles can screw up either one.

(?) I have included my syslog.conf . Do you have any idea how i can stop this ocurring?? I thought it had something to do with having multiple things pointing to the same place

(!) [Heather] Well, if you have two devices on your second IDE chain, check that they aren't both set to master, or both set to slave, in their jumpers. It's only a guess, but if the BIOS let them get this far in such a state, the kernel could be confused about who was talking, and have assumed it was a disk change.
But I'd do a shutdown and try a canister of clean air anyway; it doesn't hurt. Don't forget to cover your mouth - there are usually a lot more dust bunnies than I expect when I do this.

(?) 2./ Should I be concerned with this . I get it continually in my logs


Mar 28 12:01:02 echelon sendmail[25388]: f2S212W25388: forward /home/Users/andrew/.forward.eziekiel: World writable directory
Mar 28 12:01:02 echelon sendmail[25388]: f2S212W25388: forward /home/Users/andrew/.forward: World writable directory

I mean obviously if i am to receive mail this would need to be writable from ,as it says the world. I am right in thinking that aren't I ??

(!) [Heather] No, what this is saying is, since your home directory /home/Users/andrew turns out to be world writable, anybody else who ever logged into your system could change your .forward. That's a security problem, some utter stranger could get your mail, and the kind folks at sendmail got tired of people claiming that such lossages (whether pranks or malicious) were some sort of bug in sendmail. So, it checks.
You should either fix your home directory so it is not world writable (after all, your other stuff is vulnerable too), or you can set the DONT_BLAME_SENDMAIL feature in sendmail, and it will stop checking for silly things like these. And then it's your own fault if it breaks wickedly because of weird permissions.
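The usual fix is to strip the world-write bit with chmod; here's a sketch against a throwaway directory in /tmp rather than the real home:

```shell
# set up a directory in the state sendmail is complaining about
mkdir -p /tmp/home-demo/andrew
chmod 777 /tmp/home-demo/andrew   # world-writable: anyone can alter its contents

# drop write permission for "other"
chmod o-w /tmp/home-demo/andrew

ls -ld /tmp/home-demo/andrew      # mode is now drwxrwxr-x
```

On the real system you would run the same chmod o-w on /home/Users/andrew (and any world-writable directory above it).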

(?) There are so many questions I have when it comes to Linux.

3./ When I shut down X I might see these errors. They don't mean that

much but I would love to know how to fix then . These are found in .xsession-errors


xscreensaver-command: no screensaver is running on display :0.0
Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server
xscreensaver: Can't open display: :0
xscreensaver: initial effective uid/gid was root/root (0/0)
xscreensaver: running as nobody/nobody (99/99)
rm: cannot remove `/root/.gnome//gmc-aoiM8A': No such file or directory
subshell.c: couldn't get terminal settings: Inappropriate ioctl for device
(!) [Heather] When you shut down X numerous things will lose their server connections. If the xscreensaver stuff is happening during startup of X you probably have to fix your .Xauthority or something.
rm not being able to remove absent files, that's not a bug, it's just being noisy.
Usually apps that use ioctls recover from ioctl glitches, since ioctls are so "close to the bare metal" they behave differently on a lot of systems.

(?) 4./ When I start a ppp session via ifup ppp0 I get the following

command not found, but then it kicks in anyhow & dials up without a problem.
Wish I could fix that strange one

(!) [Heather] Your chatscript probably tells it to run an app which is not installed on your system. The ppp documentation is huge, but most of the control files are plain text under /etc/ppp or /etc/chatscripts.

(?) 5./ I think snort is a great program but it still throws some false alarms I constantly see info I don't need to like the following

(!) [Heather] Well, I don't use snort so I can't explain its stuff.

(?) Then the like of this error


Mar 27 01:15:20 echelon pam_console[11450]: can't find device or X11 socket to examine for 1.

Can you suggest a book that gets away from the obvious within Linux & helps with questions that aren't as common like the last one for example..

(!) [Heather] X, however, uses a special breed of networking internal to your box, called "UNIX domain sockets". So that's the kind of socket it's talking about looking for. What sort of examination it wanted to do, I still can't say.

(?) Thankyou

Andrew

(!) [Heather] Hope that helped. There are lots of Linux books, but I'm used to recommending to a less technical crowd. Some Linux-y things you were asking about above are not very Linux specific, so good UNIX books can help too.
Jim Dennis wrote a nice book, "Linux System Administration" from New Riders, but it's more an explanation of planning and of the things a daily sysadmin does, not "how to read syslogs". Mr. Sobell's "Hands On Linux" is good for getting people to swimming level in the icy Linux seas, but again, it's more about doing things and less about reading logs.
Not that I'm trying to discourage you! If more sysadmins cared a bit about what the messages in their logs really mean, I think many systems would be healthier. I just don't know a book that's the kind of reference you're thinking of.

(?) Hello Heather,

Wow you were right on the money with these kernel errors. I have just added a removable harddrive to this computer so i'll look into the jumper setting..Thanx

The one i'm not too sure about, though, is the sendmail part. My permissions for, let's say, my account/user directory are as follows:

drwxr-xr-x   28 andrew   users        4096 Mar 29 12:52 andrew

What permissions would you suggest here & for my other users ???

Thanks agian

Andrew

(!) [Heather] Your home directory looks okay, maybe you should see if any directories further up the chain are world writable.
The really security-conscious person might have one group per user, and reserve the group named "users" (which contains the normal accounts) for things meant for all the people to use, so that world-writable directories can be avoided entirely. Unfortunately, directories and files can only belong to one group at a time. And it's a little odd to make your home world readable too, but that's not uncommon, and on a private system, not that big a deal.

(?) Hello Heather,

Just a quick message to again say thank you very much for your prompt email reply. Unfortunately my friends & colleagues are more Windows based, so I can't call on too many people for help when Linux hiccups..

Being able to ask people like you these strange types of questions help sooo much

Cheers
Andrew


(?) So many users, So few POP accounts

From Thomas Nyman

Answered By Mike Orr

(?) I have recently entered the magical world of Linux (Red Hat 7.0). You see, I would like to configure my Linux machine so that it polls a couple of pop accounts via a dialup ISP, and then distributes any mail to users on the local network. In my mind a reasonable request. I understand that sendmail and fetchmail can be used in this respect (although sendmail "sends" mail and does not collect it). I have so far been unable to find out exactly what I need to configure (besides fetchmail) to do this. I have also tried to configure sendmail to no avail; it keeps complaining that I have not set a queue and have not set a mailbox... but try finding a how-to that tells you how to set up a queue and a mailbox locally - I can't do it. I have also tried to install qmail. I downloaded a tar.gz file, unpacked it with gunzip and then ran the tar -xvf command on it. So far all looks fine. I then followed the install instructions, and all goes well until I reach the part of the install instructions that tells me to "make setup check"; many attempts have been made, but qmail simply will not understand the instruction... hence I can't continue the installation... ah, but I digress... the point is I want to collect pop mail from different pop accounts and distribute it to either Eudora or Outlook Express on Windows machines... can I do this... and if so, which programs do I need to configure, and how???

(!) [Mike] Fetchmail works by popping the mail down, changing the envelope-to address and passing it on to the local mail-transfer program for final delivery. So the first step is to get a working mail-transfer program. This can be sendmail, qmail, exim, postfix, smail, etc.
The next step is to set up your .fetchmailrc. Assuming all the mail from each pop account is going to a single user, you can use a configuration like this:
poll pop.my-isp.net
	proto pop3
	user bob there with password XXXXX is bobby here

poll pop.my-other-isp.net
	proto pop3
	user frederick there with password YYYYY is fritz here
Now, each time fetchmail runs, bobby and fritz will find their pop mail in their Unix mailbox. You would then need to make that mailbox visible to Eudora or Outlook Express somehow, but that's another issue.
If your mail transport agent seems to be working but popped mail is still being lost, use fetchmail's -v flag to determine whether fetchmail is generating the correct recipient address and whether the mail transfer agent is accepting the message.
If you wish to distribute mail from a single pop account to several Unix accounts, it's more complicated. You could have fetchmail deliver it all to a single account which then uses procmail to distribute it (e.g., according to a special prefix in the subject). Or you could use uucp instead of pop/fetchmail. Uucp was designed for the "my site has multiple users but I only have one ISP account" problem, but pop was not. Pop was designed assuming each user would have their own mailbox at the ISP. However, finding an ISP that supports uucp nowadays is difficult, they may want a higher price for it, the configuration would be more complicated, and it would probably work best if you had your own domain.
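Mike's subject-prefix idea might look something like this in the collector account's ~/.procmailrc; this is just a sketch, and the [fritz] tag and local address are made-up examples:

```
# deliver anything tagged [fritz] in the Subject to fritz's local account
:0
* ^Subject:.*\[fritz\]
! fritz@localhost
```

Anything that matches no recipe falls through to the collector's own mailbox by default.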

(?) script

From Paul Wilts

Answered By Ben Okopnik

(?) Hello there, I am hoping you can help me out. I am writing a script. I have a file that has two columns: one column with numbers and one column with names. This file stores users' disk usage and user name, i.e.: 50000 paul. I would like to run a script/command that would look into the file and, if a user is over a certain number, copy that number along with the user's name from that file into a different file. I have tried almost everything I know, which is limited, but have not had any success. Thank you for your help.

(!) [Ben] Well, you don't say what it is that you tried, or what language the script is in, but I'll take a flyer in a "bash" one-liner. If we have a file called "quotas" that looks like this:
5       joe
7       jim
12      jack
10      jeff
20      jose
1       jerry
3       jenny
8       jamal
6       jude
and we want only those users whose numbers exceed, say, 7, then we might do something like this:

while read a b; do [ $a -gt 7 ] && echo $a $b; done < quotas
What we've done here is read in each line and load the strings into two variables, $a and $b. We then check to see if $a is greater than our target number, and echo both of them if it is.
Note that the whitespace between the numbers and the names is ignored by 'read'; I only put it in to demonstrate how clever "bash" is about stuff like that. :)
You could also do it in Perl -

perl -wane 'print if $F[0] > 7' quotas
- split each line into an array, print if the 1st member of the array (arrays are indexed starting from 0) is greater than the target.
That should save lots of wear and tear on your fingers. :)
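Since the querent wanted the matches copied into a different file, here is the same filter as an awk one-liner with the redirection included (the file names are just examples):

```shell
# recreate a small version of the "quotas" sample data
cat > /tmp/quotas <<'EOF'
5 joe
7 jim
12 jack
10 jeff
20 jose
EOF

# keep only lines whose first field exceeds 7, saving them to a new file
awk '$1 > 7' /tmp/quotas > /tmp/over-quota
cat /tmp/over-quota   # 12 jack, 10 jeff and 20 jose make the cut
```

Like the bash and Perl versions, the comparison is strictly greater-than, so jim's 7 is excluded.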

(?) Thanks very much for your help. Yes I was using Bash. I tried using the test and expr. What would you suggest for a good web site that would also be a good reference for information on scripts. Once again thanks.

(!) [Ben] Heh. I might have a suggestion.
A while ago, I wrote a 6-part series right here in Linux Gazette called "Introduction to Shell Scripting". It's been translated into 7 languages, and is used in at least two college courses. It was intended as a basic text - don't expect to be introduced into The Deepest Mysteries - but I believe that it's a very good start for anyone trying to learn shell scripting, and should get you up to basic competence in short order.
Take a look at LG issues 52-55 and 57-58, or search for my last name (Okopnik) at http://www.linuxgazette.com/search.html, since one of the articles got misnamed in the e-mail shuffle.

(?) Linux Box on windows

From Uri Rado

Answered By Heather Stern, Breen Mullins

How can I install a linux windows on windows?????
Thanks!!!!!!!!

Uri Rado.

(!) [Heather] Hmm, such a simple question, so many ways to interpret it.

[about adding Linux to an existing Windows install]

(!) [Breen] WinME apparently added a new and bizarre way of reporting cylinder numbers on large drives (the physical cylinder modulo 1024 or some such) which confused the dickens out of Parted. I don't know if the fix is in the latest released version but it has been reported on the parted list.
Make very very sure that you're using the latest version of whatever tool you're using if you've got WinME anywhere near your box.

(?) Thanks!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Uri Rado.


(?) Take a Breath!

From Wolf

Answered By Jim Dennis, Heather Stern

Answer Guy,

I just recently read that message from Rik, having trouble with some bad clusters under Windows. Anyway, I used to run a Win98 system, and I experienced the same problems, even if they might not be related to Rik's (just bad hardware or mis-partitioning, I assume). I had my HD partitioned into one primary and one extended partition, with approximately 20 gig each (HD is a Samsung 40.8 gig). Then, I used Partition Commander to make 4 out of the primary partition: One FAT32 (14 gig), and 3 FAT16 (2 gig each); and on the extended partition I put a 12 gig FAT32 and 8 gig for Linux, pre-formatted with Partition Commander. Now I know, that Linux doesn't like anything above 1024 cyl., but I always assumed, that it's translated in such a way, that only 1024 are reported (or am I wrong?). Installing Linux on that last partition was a bold failure. First, I never got it to boot from the HD, no matter if I used Loadlin or (tried) LILO, in the MBR or on the partition. When it booted after the CD install, fsck found a load of errors, and they all seemed to be beyond 32 gig. So I deleted the last partition and reformatted it as FAT32, which seemed to succeed without errors; even with bad clusters checking on. Letting a disk utility have it's way with it later, revealed again a bunch of bad clusters, and again. above the 32 gig limit. Not sure, if that's an OS or a hard error? Right now, I'm going to low-level format and re-partition the drive, then assign Linux all the space (this version doesn't like partitions picked from the middle of the drive). Hopefully it boots. Or the HD craps out altogether, there's still warranty on it... Had anyone reporting similar stuff with a big HD like mine is? Windoze is good for all kind of surprising crap, but I need it for development...

Thx,

Wolf

(!) [JimD] Wolf, I promise, no one was going to interrupt you. [This whole message was all in one line of text!] Perhaps a few paragraph breaks would also have helped.
(!) [Heather] Heh. The advantage of email is that folks can use paragraphs, commas and periods - but still get their whole say before anyone gets to interrupt.
To give the short form of the answer -- it's not an OS error at all. Bootloaders come before the OS, whether DOS or Linux. It's a dependency on firmware features - the BIOS on your system either does, or doesn't, have 1024 cylinder problems. If your BIOS doesn't have the boundary, the bootloader still has to make different calls to ask about later areas of the disk, because this is a newer feature, and is tacked on to the BIOS design.
I can just about guarantee that the final 8 Gb on a 40 Gb drive would be above that boundary!
LILO has an old keyword to beat this boundary (linear) and a new way (LBA) but this drive is so large you may need an even more special call (LBA32). Linear basically asks for cylinder/head/sector stuff. The LBA flavors tell LILO to make the new BIOS calls.
You mentioned the drive but not the motherboard - assuming it's modern enough, I'd try the LBA32 keyword in /etc/lilo.conf. (On a line alone.) If that doesn't work I'd probably use Loadlin, throw an icon for its correct command line on my desktop (looking like Tux of course) and forget worrying about it. Just remember to copy your kernel to the right place on your FAT drive that loadlin expects, whenever you decide to update your kernel.
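To make that keyword concrete, here is a sketch of the relevant part of /etc/lilo.conf (the device names and kernel path are examples, not Wolf's actual layout; rerun /sbin/lilo after any edit so the boot map gets rebuilt):

```text
lba32                  # on a line alone; older setups used 'linear' instead
boot=/dev/hda          # example: put the boot loader in the first drive's MBR
image=/boot/vmlinuz    # example kernel path
    label=linux
    root=/dev/hdb8     # example: the Linux root partition on the second drive
    read-only
```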
Unfortunately, that you're seeing the drive poorly when the 8 Gb are FAT partitioned or ext2 partitioned, implies that I may be wrong about how modern it is. Or, the drive has memorized something poor about how to present itself to us. (Yes, drives have had their own brains for a while, that means they get to be artificially stupid sometimes too.) So, I'd check if your motherboard manufacturer has a BIOS revision, because it may help some. And during low level format I'd be really extra picky about looking through the options, in case something leaps out as meaning "My motherboard is so stupid I can't even see all 40 Gb. Just give me 32 of it. Thanks." Or as we'd say, ack! No thanks!
(!) [JimD] I'm responding to your message to dispel the misconception that you've repeated here.
Linux has no problem at all with anything past the 1024th cylinder. The Linux kernel can handle any commodity drive.
(!) [Heather] vmlinuz doesn't care, no more than Windows should care how big its C: is - by the time you're this far, you're in protected mode, and not using the BIOS directly anymore. It's fdisk that gets all the headaches.
(!) [JimD] However, the bootloader (LILO) has traditionally been constrained by the level of support offered by your system BIOS (or the lack thereof). Once you get the Linux kernel "bootstrapped" (loaded into memory and running) then it can easily handle just about any arrangement of partitions. LILO has to ask the BIOS to locate and load the specific device and blocks in which the kernel (and any initial RAM disk) are stored. So the BIOS must support calls to access these devices. If the BIOS only supports calls to handle the first 1024 cylinders of a device (a common constraint several years ago) then we have to locate the kernel (and our RAM disk) within those 1024 cylinders. Alternatively we can use a different bootloader (syslinux off of a floppy, or Zipdisk, etc; grub, LOADLIN.EXE, etc).
Now over the years there have been several different workarounds to this problem. First we note that SCSI drives have normally not been afflicted with these limitations (since they don't emulate the old WD-1003 controller interface; they have their own BIOS extensions which provide the necessary support through the "INT 13" calls). Also we note that this problem is specific to the PC (so it's never been a problem on Macs, SPARCs or the many other platforms that Linux supports).
Also, most IDE drives, though they emulate the ST-506 interface (mostly as implemented by Western Digital's old 1003 chipset), will perform their own "autotranslation", internally translating "virtual head" addresses into larger cylinder numbers. Later these drives dropped all pretense of using cylinder/head/sector (CHS) co-ordinates and used a technique called LBA (linear block addressing). That basically means that any block request the drive gets (which arrives in the form of a cylinder/head/sector triplet) is translated into a single linear block number, and that block is fetched according to the drive's own indexing and mapping. BIOSes then started supporting LBA, which overcame the 8Gb limit.
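As a rough illustration of that CHS-to-linear translation, here is the usual formula with made-up geometry numbers (real drives report their own heads-per-cylinder and sectors-per-track, so treat every value below as an example):

```shell
# hypothetical geometry: 16 heads per cylinder, 63 sectors per track
HEADS=16 SECTORS=63
# a sample CHS triplet; sectors are numbered from 1, hence the "- 1"
C=2 H=3 S=4
LBA=$(( (C * HEADS + H) * SECTORS + (S - 1) ))
echo "CHS ($C,$H,$S) -> LBA $LBA"
```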
Meanwhile the latest versions of LILO support the necessary BIOS call extensions to boot from any cylinder on any IDE drive, so long as the PC BIOS in question also supports the extension.
I've described alternatives to LILO MANY times in our column. Since this is almost always an issue of installing Linux onto a system that already has a copy of MS-DOS (or any of its ilk: Win '9x, OS/2, etc.), it's usually easiest to configure your system to boot into MS-DOS and to run a program called LOADLIN.EXE to load your Linux kernel. Because MS-DOS is being booted from "the first" partition on the first or second drive (the only supported configuration), and because it has access to the "rest" of the drive (with its device drivers and various Microsoft-supplied extensions), LOADLIN can load any kernel that MS-DOS can "see."
Anyway, this issue is old and obsolete. Please reconsider before you repeat this misconception any further. This is not a "Linux" problem. It is a PC problem which has been faced (and addressed) by many Linux users *because Linux doesn't impose arbitrary constraints on how you configure your filesystems*. Linux doesn't make you install it in the first drive or the first partition, etc. Unfortunately, with that freedom have come the confusing choices that have caused so many questions among Linux users.
(!) [Heather] Having choices available, means actually having to make choices. It's a tough job sometimes but I vastly prefer it to the alternative.
This is the part where I rant about how if we improve the documentation enough at least these can be informed choices.
(!) [JimD] LILO and the related questions are confusing to converts from MS-DOS and Windows, and they are just as confusing for old hand UNIX users coming from RISC platforms, and for converts from the old SCO and other PC UNIX platforms.
(!) [Heather] Yup. There are other bootloaders around too, which are easier for some but each has their own new flavors of problem. And not all of them can get over this 1024 thing, which is to say, they actually expect the BIOS to be helpful. Can't always trust that. Welcome to the PC.

(?) about the adaptation.

From Meltem YAGLI

Answered By Ben Okopnik

(?) Hello, I am a research assistant at Eastern Mediterranean University, and I am doing a master's in the computer engineering department. I have a problem concerning the adaptation of Linux with other operating systems. If you can help me, I will be very happy.

I have a program that is written in C, and my operating system is Linux now. This program was done before on the DOS operating system (it includes stdio.h, stdlib.h, conio.h, etc.), so if I want to run this program, Linux cannot find conio.h. (There is no such header file in Linux.) Could you help me with this? I wonder, is there any corresponding file in Linux that can do the functions of conio.h? The only chance is to include this corresponding file in my program.

(!) [Ben] <Smile> I certainly hope that it's not the only chance. If you take a look at the very top of "conio.h" on a DOS machine, you'll probably see something like the following (this is from the "conio.h" that came with the old Borland Turbo-C):
/*	conio.h

	Direct MSDOS console input/output.

        Copyright (c) Borland International 1987,1988,1990
	All Rights Reserved.
*/
That's why there's none in the standard C 'include's for Linux: it's a DOS-specific library! Now, I've done very little C programming in the last few years - mostly just little quick things - so I haven't had to deal with any fancy console stuff, and no need for anything like "conio.h". If I did, the library that is commonly used in Linux for console I/O is "curses.h". Take a look at libncurses5-dev; you'll have to do a bit of rewriting, since Linux handles console I/O differently from DOS, but it shouldn't be too bad.
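Not a C answer, but for a quick feel for Linux console control: the same terminfo machinery that ncurses uses is scriptable from the shell with tput. A sketch; the escape sequences emitted depend on your $TERM, and the conio comparisons are only rough equivalents:

```shell
export TERM="${TERM:-xterm}"   # assume a common terminal type if none is set
tput clear                     # roughly conio's clrscr()
tput cup 5 10                  # roughly gotoxy(10, 5): move to row 5, column 10
printf 'hello from row 5, column 10\n'
tput sgr0                      # reset any attributes
```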
Good luck!

(?) Changing the "login-sequence" in Linux?

From Alf Kato Brandal

Answered By Ben Okopnik

(?) Hello! We are working on a student project where we want to get a Perl script to log us in to Linux. Does anyone know how we can do this? Please do not just answer that we need to use the "Expect" module in Perl, because we don't understand how we can use it. Is there any other way?

(!) [Ben] Erm... let's run this by again. You don't understand how to do something in a certain language which you don't know... so your solution is to switch to another, more complex language which you don't know. Does this make any sense to you?
Read the "expect" man pages and documentation. It is probably the best tool for what you describe, since it is written for exactly this purpose. Put in a little effort, and you'll get back the result you're looking for.

(?) Linux, X, Dell Video Card

From Glenn Martyna

Answered By Ben Okopnik, Daniel S. Washko

(?) I have loaded Red Hat Linux 7.0 on my new Dell computer (4MB integrated AGP video card, i810 chipset); however, X windows will not start. I tried loading XFree86 3.3.6 instead of the higher version of X present in the Red Hat release. Now I get an error that says "_X11TransSocketUNIXConnect" failure.

(!) [Ben] The Evil TransSocket Error :) simply means that an X client couldn't connect to your X server - which usually means the server never started, because you've chosen the wrong one for your video hardware. I'm not even going to speculate on the problems that you may have created by installing XFree 3.3.6 over a 4.0 installation (at least that's how I'm reading it), but I'm certainly not brave enough to try something like that.
Also, if I remember correctly, there was either no or minimal support for the i810 in 3.3.6, but there was in 4.0. You may want to reinstall, this time with 4.0, and see what happens.
(!) [Daniel] I have a Dell Optiplex 110 running RedHat 7.0. It has the Intel 810 chipset and required this driver: http://appsr.intel.com/scripts-df/filter_results.asp?strOSs=39&strTypes=DRV%2CUTL&ProductID=178&OSFullName=Linux*&submit=Go%21
I hope this link works for you, if not, it is the first selection if you do a google search on "intel 810 and linux." The driver works with XFree 3.3.6 and up.

(?) sendmail

From Kenneth Moad

Answered By Jonathan Markevich, Dan Wilder

(?) I am trying to have sendmail send the contents of a file to an email address. I want to do this from the command line though.

I think the command is something like [sendmail -t <<fff] but that does not seem to be working correctly.

(!) [Jonathan] I like to use an MUA rather than an MTA for this... I prefer mutt (of course).

mutt -s "Here's the file" -a ~/procmail.log root@localhost
(!) [Dan] On the other hand, sendmail is possibly more suitable for scripting applications, such as automatic email notifications of irregularities in logs. It offers much better portability in scripts, and better control over headers. For example, using sendmail you have direct control over "From:" headers, which can be something of a trick with various MUAs. If needed, you can script-generate MIME attachments, secure in the knowledge that they won't be mangled by an MUA that thinks it knows more about what you want than you do.
Most MTAs offer a "sendmail" binary with at least some command line compatibility. I've used Smail, Exim, and Postfix in preference to sendmail these last ten years, and the following works just fine with all of them. Most likely, it also works with Qmail, MMDF, and anything else that attempts to offer some sendmail compatibility.
"sendmail" may not be on your path. Try

which sendmail
and if it doesn't get you anything, use the full pathname. The usual location these days is /usr/sbin/sendmail. On older systems try /usr/lib/sendmail. If that doesn't work, try "locate sendmail".
To use "sendmail -t" you put the headers in the source document, with no intervening blank lines, then an empty line, then your email text. For example (drop the indents in the real thing):
From: me@someplace.com
To: nobody@noplace.you.know.com
Subject: email test

This is a test.  If this was a real email you would
have been asked to read it.  This is only a test
The "<<" construct you mention is a so-called "here" document. The above example, in the context of such, would look like (again, delete any indents or ">" quoting):
/usr/sbin/sendmail -t <<fff
From: me@someplace.com
To: nobody@noplace.you.know.com
Subject: email test

This is a test.  If this was a real email you would
have been asked to read it.  This is only a test.
fff
This can be very handy for scripts, as the shell expands shell variables that may appear inline. So:
WHAT="small armadillo"
/usr/sbin/sendmail -t <<fff
From: me@someplace.com
To: nobody@noplace.you.know.com
Subject: email ${WHAT}

This is a ${WHAT}.  If this was a real email you would
have been asked to read it.  This is only a ${WHAT}.
fff
will expand what was previously "test" as "small armadillo".
To use the contents of a separate file, say, a file called "fff", use
/usr/sbin/sendmail -t <fff
Note the single "<". The contents of the file need to be the same as in the "here" document: no blank lines before the end of the headers; headers including at least "From: ", "To: ", and "Subject: "; then an empty line; then the body of the email.

(?) Thank you very much for the help! I decided I will use the "<" instead of the "<<" in my script thanks to your email. You also gave me a couple of other ideas too.

shade


(?) about a stubborn mount error

From Gabriel Florit

Answered By Heather Stern

Dear Linux Gazette,

I come to you hoping that I might finally solve this problem. I have extensively searched newsletters and IRC sessions, but nothing. Most users give up after an hour or so, telling me they have no clue. I hope you do... :)

(!) [Heather] Interesting. What suggestions have they offered that didn't work?

(?) I am running RH7. I have two hard drives, a 10G and a 40G. The 10G is the master one, no partitions, and it is where I have my Win98 system. The 40G is divided into four, where I have a swap, two Linux natives, and a DOS partition, as storage for the Win98 system. Now, when I am in Win, I see both C (the 10G) and D (the DOS partition of the 40G). But when I am in Linux, I only see hda1, that is, the 10G drive. sfdisk -l tells me that the DOS partition on the 40G drive is hdb7, but when I try to mount it using


mount -t vfat /dev/hdb7 /mnt/win

or


mount -t msdos /dev/hdb7 /mnt/win

i get an error that says

mount: wrong fs type, bad option, bad superblock on /dev/hdb7,
       or too many mounted file systems
(!) [Heather] Let's investigate each of the three points.
You're certainly able to see the rest of /dev/hdbN, otherwise, your complaint would be about Windows making Linux not work.
Wrong fs type:
There are numerous partition types usable by Windows these days. You mention that sfdisk -l says /dev/hdb7 is your DOS partition, but not which type it is.
I've been hearing that WindowsME has slightly tweaked their partition type; this gave both Partition Magic and parted fits. So... while in most cases we here at the Gang wouldn't care... which flavor of Windows do you have, and do you have any Security updates or service packs? How did you make the D: drive?
Anyways I assume that you have the msdos and vfat filesystem support properly installed since you say that getting /dev/hda1 mounted isn't a problem. So my first guess would be that /dev/hda1 and /dev/hdb7 are different flavors of DOS partition.
Bad option:
Your command lines looked okay to me. Assuming /mnt/win exists.
Bad superblock on /dev/hdb7:
Well, I suppose there might be something subtle that really is wrong with your D: and Linux is just being extra super duper cautious. So perhaps you should run the Windows disk checker with all the "yes, check everything thoroughly" options turned on. (As opposed to its normal mode, where it skips time-consuming things like looking for bad spots on the drive.)
Just curious: does /dev/hdb7 straddle the 1024-cylinder boundary? I've never heard of mount caring about that, but it is in the middle of a huge drive, so...

(?) I have created the win dir in the mnt dir. (lots of people seem to ask me that).

(!) [Heather] :) I would have created /mnt/c, /mnt/d ... but that's just because, if my client is a serious dual booter, they continue to think of the windows parts as "drive letters" so this is good for keeping them from getting mixed up. (Simple enough: once it's mounted, it's a drive letter.) So, I often use /mnt/a for floppy access forced to vfat fs, in case I have any trouble with a DOS floppy.

(?) I up2dated everything but the kernel, as suggested.

(!) [Heather] So you have the current stock kernel for RH7, which version is that? I think it weird that you weren't given suggestions to rebuild a kernel and leave everything else alone - only the kernel, its modules, and mount should have anything to do with your problem.

(?) Still nothing. The odd thing is that I can access the hdb7 from windows. I can even write to it. But in Linux, RH7 using GNOME, I can't.

(!) [Heather] GNOME has nothing to do with it... or it shouldn't. Have your tried logging into a plain text console as root to do this?
If you come up in "linux single" (by typing that at your boot: prompt) you should be in the same state that the mounting mechanism from /etc/fstab is in when doing its original mounts.

(?) I have asked many different linux users. None can help me. Hope you have an idea of what's going on.

(!) [Heather] Well, first we have to discover what it is, then maybe we can figure out why it's doing it. If it's the mount command at fault, we'll have to look in mount's sources for its maintainer.
Hmm, here's an idea: if you have spare space on one of your other partitions equal to or greater than the size of the complaining partition, you can make a binary copy of it to a file:

dd if=/dev/hdb7 of=/usr/local/bigspace/D-driving-me-crazy
Yes, this will take a while. Might want to add bs=1024 or even bs=4096 on the end so it will grab things in chunks. I think that should work even if the partition image isn't an exact multiple of the blocksize... but one of the Gang who plays more with dd than I do should comment on that.
Then, you can ask file if it looks like what you think it is:

file /usr/local/bigspace/D-driving-me-crazy
And if it agrees that it's a filesystem image, then try to loopback mount the file:

mount -o loop -t vfat /usr/local/bigspace/D-driving-me-crazy /mnt/win
...which is not a good solution to your problem, but would pinpoint that mount can, or cannot, mount this flavor of DOS partition. If it can't, then factors to consider are its size, and what type it really is; having got this far it probably wouldn't be a cylinder problem, since the image is at a new location.

(?) Regards,

Gabriel Florit

(!) [Heather] Well, let us know if these thoughts shed any light on the matter!

(?) Dear Linux Gazette,
or The Gang,

Thanks very much for your prompt response! I will follow your advice and let you know as soon as possible.

Cheers,
Gabriel Florit
(the guy with the mount problem)


(?) How do I choose?

From Serge Wargnies

Answered By Heather Stern

Hi,

I may confess that I am coming from another platform and want to migrate to the open world ...

My question is which distribution to choose ... I am lost between Caldera, Red Hat, SuSE and the others ...?

(!) [Heather] Nope, not a stupid question at all. You actually want to take a look at the differences before picking one; sounds wise to me.
We have had this question fairly recently and discussed it in some detail. Basically, you need to know what kind of things you want to use your Linux for, and what things various distros are aiming to be -- then, you can pick one that is trying to head your direction, and you have a much better chance of picking usefully.
The Gang expounded on this in Issue 60, "Best Linux Distro for a Newbie...?" (http://www.linuxgazette.com/issue60/lg_answer60.html#tag/4) and I hope you'll find that answer useful too. If these aren't quite enough, let us know what you're thinking of, and we'll try to help out a bit more.

(?) Thanks very much in advance... Regards Serge Wargnies

(!) [Heather] Welcome to the world of Linux, Serge. I hope you'll find your first forays here pleasant.

(?) Thanks for your answer. I read the Gazette but I am still a bit confused, so ...

(!) [Heather] I have cc'd back in the linux-questions-only@ssc.com address, so the rest of The Answer Gang can see, and reply if they also have more comments.

(?) I am coming from the Windows world, started as a developer on Windows 2, 3, 3.1, 9X and NT/2000. I also did a lot on the system side as well as acquired a certain knowledge on Databases ...

As you mentioned, it depends what you want to do with it. I am not searching for a new graphical environment; I am looking to acquire some new knowledge of a growing system that - if I am not that wrong - is quite close to UNIX. As my experience should tell, I am not too much into console-only mode, so I want something with a GUI, which is not available on my home PC at this time... or am I wrong...???

(!) [Heather] Well, you can take a look at a bunch of the screenshots over at LibraNet, because they give a very clear sense of what the K desktop looks like. The Gnome desktop also looks very similar. (http://www.libranet.com)
If those are close enough to a usable GUI for you then you can probably do okay with most of the nicer distributions, and the next concern would be making sure that you have a decently safe bet on a clean install, followed by an interest in good access to developer tools.
If that's too different from the GUI you enjoy, there's a Window Manager named fvwm95 that's designed, as you might guess, to be a really close match. That means the task bar acts the same, for example. There will still be slight differences.
Once you start to get used to a few applications, you can play with loading a few other Window Managers and see if you like some of the others; many have interesting extra features.

(?) I plan to learn about the environment but once it is installed, I don't want to spend 10 weeks - I am not often at home - to have the PC installed...This has to be done in one shot. I will learn from the system after...as well as starting to do some development - porting application from Windows to LINUX/UNIX ....

(!) [Heather] So you probably want to try following the Willows Software Twin API (http://www.willows.com). Or stick very closely to GTK+ and follow the same style that The Gimp did; since it has a win32 version as well as a Linux one, you get a successful example for studying the similarities and differences. GTK has its own site, http://www.gtk.org, and so does the Gimp, http://www.gimp.org
...and if having to run an occasional, but possibly well behaved Windows binary on your Linux is interesting, you'll also want to keep an eye on the WINE project (http://www.winehq.com) which is trying to provide a support layer for win32 binaries to be run directly within Linux and a few other OS'.

(?) So what can I do, doctor?

Serge Wargnies

(!) [Heather] Okay, so we want to get you into a developer-friendly install, but not one that expects you to be a guru during the installation itself.
Most packages available out there are available in at least one of 4 states: source tarballs (you get to build it yourself; if you're lucky that's only 3 commands, not very hard, and listed in the README or INSTALL textfile for the package), Redhat style rpm files, rpm files for non-redhat derivative systems (like SuSE or TurboLinux), and Debian packages. Mandrake and some other Redhat derivatives can share Redhat style rpms. Stormix, CorelLinux, Libranet, Progeny, and Debian itself all can share deb files.
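For reference, the "3 commands" for a source tarball are almost always the classic configure/make dance. A sketch; "package-1.0" is a made-up name, and a tiny stand-in tarball is built first so the unpack step can actually run:

```shell
# build a throwaway stand-in tarball (real life: you download one instead)
mkdir -p package-1.0
printf 'usual drill: ./configure && make && make install\n' > package-1.0/INSTALL
tar czf package-1.0.tar.gz package-1.0
rm -r package-1.0

# the classic dance:
tar xzf package-1.0.tar.gz         # unpack the source tree
cat package-1.0/INSTALL            # read the instructions, then (commonly):
# cd package-1.0 && ./configure && make && make install   # install step as root
```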
If you decide that a debian based system works for you, then I'd definitely choose either Libranet or Corel, instead of "the real debian" installer, because in your case, you're not a Linux expert; even though the Debian installer is ok, the boost in helpfulness that these commercial distros provide during the install would be extremely worthwhile for you.
If you're afraid of repartitioning but still want a fairly easy install of a "big name" Linux distro, consider BigSlack... the version of ZipSlack (http://www.slackware.com/zipslack) that includes X and Gnome, but can be installed directly into a FAT filesystem just using PKunzip. Slackware has been around a long time and is well known as being friendly for people who like to work directly with source code.
If you decide that because there are lots of Redhat-style packages out there, you need a redhat compatible system, I guess Mandrake would be worth a try. Make sure to get a really recent version or buy it direct tho, because they had some bugs during install that they fixed recently, and you wouldn't want to get nailed by one just because the local store had a dusty copy.
I notice you're not in the U.S. so if English isn't your native language, maybe there's a localized variant that would be handy for you. Linux Weekly News lists a whole bunch of them (http://www.lwn.net) in its Distributions sidebar. Some of the major distros support many languages too.
Backing up the system in its current state is a good idea, not so much because of the risk (well, yes, there's some, not horridly bad) but because now is a good time to decide what's important and not on your machine; it will be good to have if there's any sort of trouble, not just linux install issues. For example, a power outage right when you've almost got things humming :(
Let us know if you need more!

(?) Thanks very much for the answer, I guess you have summarized the situation pretty well ...

I will follow the links...

Regards
Serge Wargnies


(?) I was wondering

From andrew

Answered By Mike Orr

I see a number of suspicious files in my /proc directory. For example, there is a directory called 6, and when I look in this folder I see a number of files, e.g.

[root@echelon 6]# ls -la
ls: exe: Permission denied
ls: root: Permission denied
ls: cwd: Permission denied
total 0
dr-xr-xr-x    3 root     root            0 Mar 26 14:28 .
dr-xr-xr-x   89 root     root            0 Mar 26 07:32 ..
-r--r--r--    1 root     root            0 Mar 26 14:29 cmdline
lrwx------    1 root     root            0 Mar 26 14:29 cwd
-r--------    1 root     root            0 Mar 26 14:29 environ
lrwx------    1 root     root            0 Mar 26 14:29 exe
dr-x------    2 root     root            0 Mar 26 14:29 fd
pr--r--r--    1 root     root            0 Mar 26 14:29 maps
-rw-------    1 root     root            0 Mar 26 14:29 mem
lrwx------    1 root     root            0 Mar 26 14:29 root
-r--r--r--    1 root     root            0 Mar 26 14:29 stat
-r--r--r--    1 root     root            0 Mar 26 14:29 statm
-r--r--r--    1 root     root            0 Mar 26 14:29 status
(!) [Mike] This is normal. See "man proc".

(?) Notice the permission denied on those 3 files. Why is this if I am root??

(!) [Mike] I get this error when I'm not root but not if I am root. The three "files" are symbolic links to other directories. So it would depend what the permissions of those "other" directories are.

(?) I can't delete them or change anything about them. What would you suggest? I mean, they are links to other files, so why can't I just unlink them?

(!) [Mike] You shouldn't try to change or unlink them. The directory will disappear when process 6 dies.
To see for yourself that nothing funny is going on, run "umount /proc" as root. (If you get a "Device Busy" error, it probably means some process has its current directory inside /proc. You cannot unmount a filesystem if somebody's current directory is inside it.) The /proc directory should be empty now. Run "mount /proc" or "mount -t proc proc /proc" and the "files" should reappear.
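A gentler way to convince yourself, without unmounting anything, is to inspect your own shell's entry in /proc (a sketch; $$ expands to the shell's own PID, so the permission checks pass):

```shell
ls -l /proc/$$/cwd /proc/$$/exe        # the symlinks resolve: it's your own process
tr '\0' ' ' < /proc/$$/cmdline; echo   # the command line, NUL-separated on disk
head -1 /proc/$$/status                # the first line names the program
```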

(?) Also, as a side note, do you have any idea why, when I'm in a shell within this directory, those 3 files are flashing??

(!) [Mike] That's part of the color configuration of the 'ls' command. Usually, flashing means it's a dead symbolic link (a link pointing to a nonexistent file). If it's inside /proc, I would assume the kernel knows what it's doing and not worry about it.

(?) More observations of a cardboard box

From Randjbarnhart

Answered By Heather Stern

(?) Maybe the lady who asked about the cardboard box is a spanish teacher. I recently got a version of Don Quixote that talks about him fixing up old knight accessories with cardboard. Since Cervantes wrote the book in the 1500's, I was wondering when cardboard was first used. I know that when my students read this, someone is going to ask about it. Maybe the word "carton" meant a thin type of board used for crates or something. The footnote says cardboard but it has to be wrong --don't you think so? Anyway, sorry for boring you. Just wanted to express empathy for the cardboard box lady.

(!) [Heather] The cardboard box is now infamous in LG (as a reference from Issue 52)
Our only previous mention of cardboard before that had been to describe that chroot (while imperfect) was better than someone's attempt to keep his users safely trapped in their home directories.
There's a more complete history of packaging than we found last time at:
http://www.ag.ohio-state.edu/~ohioline/cd-fact/0133.html


This page edited and maintained by the Editors of Linux Gazette Copyright © 2001
Published in issue 65 of Linux Gazette April 2001
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


Re: Graphics Programming for Printing / Faxing (Issue 60)

Tue, 27 Feb 2001 16:32:59 -0800
Anthony Greene (The Answer Gang)

Re: Graphics Programming for Printing / Faxing (Issue 60)

The quick and easy way for a Perl programmer to convert data to faxable invoices/reports is to output the data as HTML, convert it to PostScript using html2ps <http://www.tdb.uu.se/~jan/html2ps.html>, then fax the result using efax or mgetty+sendfax.
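In outline, that pipeline might look something like the following; html2ps and the fax software are assumed to be installed and configured, and all the file names are made up:

```shell
# Step 1: have your Perl (or other) report generator write HTML.
# A stand-in invoice is used here.
cat > /tmp/invoice.html <<'EOF'
<html><body><h1>Invoice 1234</h1>
<table><tr><td>Widgets</td><td>$42.00</td></tr></table>
</body></html>
EOF

# Step 2: convert the HTML to PostScript (assumes html2ps is installed):
#   html2ps /tmp/invoice.html > /tmp/invoice.ps

# Step 3: fax the PostScript with efax or mgetty+sendfax, e.g. (number made up):
#   fax send 555-0100 /tmp/invoice.ps
```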


Alliance Pro-Motion driver

Sun, 25 May 1997 23:23:08 -0400
Ralph E Bugg (buggr from sssnet.com )

There was a letter to you from an unidentified person looking for a driver for an Alliance Pro-Motion video card.

"....Anyways, I would just run Linux but my problem is that Xwindows doesn't have advanced support for my video card, so the best I can get is 640x480x16colors and I just can't deal with that. Maybe I'm spoiled. The guy I wrote on the Xwin development team told me that they were working on better support for my card, though. (Aliance Pro-Motion). ...."

If he goes to http://www.alsc.com and follows the path to tech support, he will find an SVGA driver (no source code, though) for X. I am using an NEC Ready 9618 system, which uses one of the Alliance chips on the motherboard. It took a LOT of fiddling with the configuration file, but it works at higher resolutions at 256 colors.

Hope you can pass this on to him.

Thanks, Ralph Bugg.


How to avoid launching Midnight Commander by accident

Mon, 26 Feb 2001 10:31:51 -0500
Allan Peda (apeda from linkshare.com)

I've typed "mc foo bar" one time too many when I really meant to type "mv foo bar". Removing Midnight Commander is not an option, because that breaks some file-explorer-type GUI utilities, so I cooked up a bash script to double-confirm that I wanted to type what I (probably mis-)typed:

See attached script mc.bash.txt
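Since the attachment isn't reproduced here, this is a sketch of the same trick as a shell function, not Allan's actual script:

```shell
# Wrap mc in a function that asks before running when it is given file
# arguments -- the usual symptom of having typed "mc" for "mv".
mc() {
    if [ $# -ge 2 ]; then
        printf 'Really run Midnight Commander with %d arguments? [y/N] ' $#
        read answer
        case "$answer" in
            y|Y) ;;                          # fall through and really run mc
            *)   echo "aborted"; return 1 ;;
        esac
    fi
    command mc "$@"                          # invoke the real mc binary
}
```

Put it in ~/.bashrc, and "mc foo bar" will ask first, while plain "mc" still starts the file manager directly.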


Gazette, I.55, Answer: Missing Root Password

Wed, 28 Feb 2001 17:23:18 +0100 (CET)
Johannes Kaiser (uehj from rz.uni-karlsruhe.de)

It should be easy to get in if you use LILO. At the boot prompt, type the name of your boot image (you can find that out by hitting the Tab key twice), followed by the word "single". For a normal Red Hat installation, typing "linux single" should do it. You can also append "init=/bin/sh" instead of "single"; that leaves remounting your root filesystem read-write to you.


SNMP Tool for networking (re: March tips)

Thu, 1 Mar 2001 17:02:10 +0100
Casas Bouza, Robert (robert.casas from puig.es)

Hi!

About the question from Antonio Sidona (looking for an SNMP tool for networking) in your March 2001 issue: we have tried NetSaint (www.netsaint.org). It's a great tool, although it needs to be configured properly, and it can monitor any system, whether it supports SNMP or not. The funny thing is that we HAVE HP OpenView installed, but it requires a license per console, while NetSaint can be installed on a web server and accessed through a browser. We actually use them on a complementary basis.

Robert Casas


distro version upgrade? (slackware)

Thu, 01 Mar 2001 13:45:43 -0800
Michael Moore (michael_moore from csnw.com )

Dan Blazek wrote:

Hi, I think I'm running Slackware 2.2 (the kernel is 2.0.27, for sure, anyway). Is there some kind of cluster or patch bundle I can download to upgrade my box, like a single package I can install to at least jump up to Slackware 3? If there is, can you please tell me where to find it, and whether there is a special way to install it? Or am I going to be stuck installing a new image?

Heather wrote:

I thought there wasn't one, but rarely say so without looking. And what do you know, I found:

slackUp - The Slackware Auto-Upgrade Utility
http://xfactor.itec.yorku.ca/~xconsole/download.html

You should read its README yourself, to check that it can handle your version. If it can't, get involved with the authors ... they haven't updated it (or at least the web page) in almost a year, and you may spark an entirely new round of development for the project.

David Cantrell, one of the Slackware staff members, has also made a pretty comprehensive Slackware upgrade utility, autoslack. While this is not a supported Slackware project, David's involvement means it is likely to work well with their site. You can find it on their unsupported-projects server at http://zuul.slackware.com

-Michael


2-cent Tip: Cleaning up after Netscape

Thu, 1 Mar 2001 17:51:39 -0500
Ben Okopnik (The Answer Gang)

Linux is a wonderfully reliable OS: even the software that runs under it is reliable. X Windows runs reliably. Midnight Commander is reliable. Even Netscape Communicator crashes reliably.

Ooops...

Netscape is a nice piece of software, in that it supports everything (and then some) that a modern "fancy" browser should support. Unfortunately, the rate at which it goes down brings to mind expressions about hookers on payday - and in my experience, it's been this way from day one. Not only that, it tends to leave behind hung copies of itself (which makes the processor load shoot right up into the red) and lockfiles that create error messages the next time you try to start it up.

A few months ago, tired of having to clean up the random garbage, I created this script. If Netscape has crashed, or is simply frozen, it will take care of everything. Nowadays, it's my automatic response to a Netscape crash. <sigh> I'm getting awfully familiar with typing "notscape"...

See attached script notscape.bash.txt


Regarding backups [http://www.linuxgazette.com/issue64/tag/28.html]

Thu, 1 Mar 2001 19:02:35 -0500
David Jao (scythe from dominia.org)

Hi guys,

This is in response to Bruce Harada's message at

http://www.linuxgazette.com/issue64/tag/28.html

I would have preferred to contact him directly but I could not find an email address for him on the page.

Using gzip on backup files 2 GB in size is a really bad idea: if the compressed file gets corrupted at any point, everything after the point of corruption becomes unrecoverable.

Of course, if hard drives were perfectly reliable then corruption would be no problem; but if that were the case, you wouldn't be doing backups anyway.

In general, compressing large backups is almost never worth it, because of these reliability issues. If one must use compression, bzip2 is a better choice: since it compresses in independent 900 kB blocks, corruption affects only an individual data block.

-David
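For the record, the bzip2 variant David suggests is just a different filter in the same backup pipeline; a toy run (file names made up; tar and bzip2 assumed installed):

```shell
# Back up a tiny tree through bzip2.  bzip2 compresses in independent
# ~900 kB blocks, so a corrupted block costs you only that block, and
# bzip2recover can salvage the remaining ones.
mkdir -p /tmp/lg-bzip2-demo/data
echo "important stuff" > /tmp/lg-bzip2-demo/data/file1
tar cf - -C /tmp/lg-bzip2-demo data | bzip2 -9 > /tmp/lg-bzip2-demo/backup.tar.bz2

# Verify that the archive round-trips.
bzip2 -dc /tmp/lg-bzip2-demo/backup.tar.bz2 | tar tf -
```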


Modules cannot load with kernel recompile

Thu, 01 Mar 2001 22:39:33 -0500
Tom Walsh (tom from cyberiansoftware.com)

Regarding 'http://www.linuxgazette.com/issue64/tag/16.html': I use 'make install' myself; it saves you the step of copying the image to /boot and then forgetting to run lilo.

-- Tom Walsh


RE: Linux PPP route question

Fri, 02 Mar 2001 14:06:07 -0600
Brian Finn (nacmsw from airmail.net)

Hi,

I found a dial-on-demand package for Linux called Diald. I think it may help alleviate your PPP problems. You can find it at:

http://diald.sourceforge.net

Hope this helps!
Brian Finn


"Interrupt for Linux" question from S. Auejai

Mon, 05 Mar 2001 12:09:38 -0600
Bill McConnaughey (mcconnau from biochem.wustl.edu)

I found Alessandro Rubini's book, Linux Device Drivers, published by O'Reilly and Associates, very helpful in getting started on writing device drivers (including interrupt handlers).


2ct tip - Removing temp files

Tue, 06 Mar 2001 20:58:25 -0800
forsberg (forsberg from adnc.com)

When writing a program that uses temporary files on a Unix/Linux system, it is convenient to use a Unix feature: create the temporary file, then remove it with unlink() without closing it first.

#include <stdio.h>     /* fopen(), fclose() */
#include <unistd.h>    /* unlink() */

FILE *fp = fopen("/tmp/somefilename.tmp", "w+");
unlink("/tmp/somefilename.tmp");  /* the name is gone; the open file remains */

/* ... use the temp file through fp ... */

fclose(fp);  /* or just exit() */

You can then read and write this file for the entire existence of the process. The temp file will not actually be removed until the file is closed or the program terminates; only then does the kernel remove it. Use this technique to guarantee that all temp files are cleaned up even if your program crashes.

Bruce Forsberg


Linux RedHat question

Wed, 28 Mar 2001 12:18:34 -0800
Ray Hanes (high_tech_hanes from yahoo.com)

I saw your page and don't know if you're still actively maintaining it and answering questions, but in case you are: I'm trying to find a variable for what version of Red Hat is running. If there is no variable for it from the system, then how can I get a script file to detect the distribution version and assign it to a variable?

Hi Ray --

On a default Red Hat install, the file /etc/redhat-release contains the version, and most Red Hat installs leave that file in place. (I always delete it, because its existence causes the rc.local script to overwrite /etc/issue at bootup.)

Hope this helps -
--
Breen Mullins
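A sketch of such a script fragment (the sed pattern and the "unknown" fallback are mine, not Breen's; the release-string format is the one a stock Red Hat install writes):

```shell
# Pull the version number out of /etc/redhat-release, falling back to
# "unknown" on systems where the file has been removed.
rh_version() {
    if [ -r /etc/redhat-release ]; then
        sed 's/.*release \([0-9.][0-9.]*\).*/\1/' /etc/redhat-release
    else
        echo unknown
    fi
}
RHVER=$(rh_version)

# The parsing itself, shown on a sample release string:
echo "Red Hat Linux release 7.0 (Guinness)" \
    | sed 's/.*release \([0-9.][0-9.]*\).*/\1/'     # prints 7.0
```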


Question on stty

Tue, 20 Mar 2001 11:46:06 -0800
Iris Louie (IHo from altera.com)

I have to type in stty erase "backspace" each time I log in. How can I set it as part of the default stty settings?

Put the command in your ~/.bashrc file or whatever file your shell reads at startup. -- Mike


inode related question

Fri, 16 Mar 2001 09:57:20 -0800
HCL Amritsar (narenderpk from usa.net, tag from ssc.com)

In a Unix file system, if the inode of the current directory is known, explain how to find the inode of the file ../file1.

$ ls -i ../joey/.bashrc
 407098 ../joey/.bashrc

-- Mike
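Fleshed out a bit (the file names are made up; stat's -c %i format is GNU coreutils):

```shell
# Two ways to read an inode number: ls -i and stat.
mkdir -p /tmp/lg-inode-demo
echo hi > /tmp/lg-inode-demo/file1
cd /tmp/lg-inode-demo

ls -i file1          # first column is the inode number
stat -c %i file1     # the same number, from stat
ls -id .             # directories have inodes too (-d: list the dir itself)
```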


Protecting web pages

Mon, 26 Mar 2001 10:05:00 -0800
Doranda L Martin (anonymous)

Hello,
my name is D and I have a web page. I have a question: I would like to know how to put an entry box in my web page. Actually, I am trying to make it so that you must have a password to get to certain parts of my web page (basically the table where my poems are), and to have a way to make people enter a password to look at the poems if someone accidentally got to the table. I would like:

box 1: their email address
box 2: password
submit

please help, if you could send me codes or somewhere to go or anything it would be a great help

If your web server is Apache and it has been configured to support (1) HTTP Basic Authentication, and (2) .htaccess files, do the following:

  1. Use the htpasswd program to create a password file, e.g. "htpasswd -c /path/to/htpasswd/file username". (This is not the Unix password file; for security, you should use different passwords than your login passwords.)
  2. Create a file called .htaccess in the highest-level directory you wish to protect. The file should contain:
AuthName "Poems"
AuthType Basic
AuthUserFile /path/to/htpasswd/file
require valid-user

Now, when the user tries to access anything in or under that directory, the browser will prompt her to type her "Poems" username/password. If she does not type it correctly, she'll get an "Unauthorized" error.

Your Apache configuration file must "AllowOverride AuthConfig" for either the entire site or the portion of the site you're concerned about.

See the Apache documentation: http://httpd.apache.org/docs/mod/mod_auth.html and http://httpd.apache.org/docs/mod/core.html#allowoverride
-- Mike


SSH article

Tue, 6 Mar 2001 14:55:51 -0800
Bryan Henderson (bryanh from giraffe-data.com)

In the article on ssh, scp, and sftp in the March issue, there is an important area that isn't covered: client/server compatibility.

If you're just doing a basic ssh (to get a remote shell), you're using a standard SSH protocol and any program named "ssh" is likely to work with any remote system that offers a service it calls "ssh."

But scp and sftp are not standard protocols. If you run the scp program from OpenSSH against a remote system that's running an original ssh server, it will not work. (And when I learned this the hard way, it was very hard indeed: the error message isn't "this server doesn't implement this scp protocol". It is, for reasons that took a day of debugging to figure out, "invalid file descriptor"!)

-- Bryan Henderson

This was also forwarded along to the author of that article for comment, but we got no reply by press time. -- Heather


Linux commands

Wed, 14 Mar 2001 09:13:48 -0500
katja.andren (katja.andren from spray.se)

Hi!

I'm a new Linux user (Red Hat version) and I'm looking for a summary of commands, a "Linux version of the DOS commands". Do you have any good tips on where I can find one?

As it happens, such a thing exists. The summary, as well as a lot of other useful tips for those who are used to DOS or Windows, is included in the DOS-Win-to-Linux-HOWTO. Take a look in "/usr/doc/HOWTO" (if you have the HOWTOs installed on your system - if you don't, you should!), or at <http://www.linuxdoc.org> for the latest version. -- Ben
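A small taste of the equivalents that HOWTO covers, run in a scratch directory (all file names made up):

```shell
# Linux command on the left, DOS equivalent in the comment.
mkdir -p /tmp/lg-dos-demo && cd /tmp/lg-dos-demo
echo hello > a.txt
cat a.txt             # DOS: type a.txt
cp a.txt b.txt        # DOS: copy a.txt b.txt
mv b.txt c.txt        # DOS: ren b.txt c.txt
mkdir -p subdir       # DOS: md subdir
ls -l                 # DOS: dir
rm c.txt              # DOS: del c.txt
```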


How to write a self-extracting sh script?

Thu, 15 Mar 2001 07:35:49 +0100
Josep Torra Valles (jtorra from campus.uoc.es)

I would like to know how to write a self-extracting sh script containing a tar.gz (the source code of my program) to be installed; after extraction, I need to run make in order to compile and finish the installation.

Thanks in advance

Strange as this may sound, about a year ago I wrote a shell script that does exactly that - including automatically running "make" or another program to process the files. I even packaged it as a tarball, with documentation, configuration files, and even a man page... but I never released it. Why? <shrug> There are a lot of tangled issues, including the fact that this mechanism can easily be misused for malicious purposes. On the other hand, so can anything that you download off the Web and execute without checking it out first. Whatever; your e-mail has spurred me to go ahead and make it public: you can download "SFX" from my site, at <http://www.geocities.com/ben-fuzzybear/sfx-0.9.4.tgz>. If you run it without any options, it'll tell you how to create files that will self-extract and compile, all in one shot. I also took some trouble with the documentation; the "method" files are a pretty cool way to specify action after extraction, and you can always create your own.

I'd really appreciate feedback from anyone who ends up using SFX; if there's enough interest, I'll rewrite it, possibly in C or Perl.
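For readers who want the bare bones of the trick itself, here is a minimal sketch of the general technique (not Ben's SFX; all names are made up, and a one-line Makefile stands in for a real build):

```shell
# Build a payload: a src/ tree with a trivial Makefile.
mkdir -p /tmp/lg-sfx-demo/src && cd /tmp/lg-sfx-demo
printf 'all:\n\t@echo built\n' > src/Makefile
tar czf payload.tar.gz src

# The self-extractor: a shell stub, a marker line, then the raw tar.gz.
cat > installer.sh <<'EOF'
#!/bin/sh
# Everything after the __ARCHIVE__ line is binary payload, so the stub
# must locate it, pipe it to tar, and exit before the shell reads it.
line=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$line" "$0" | tar xzf -
cd src && make
exit 0
__ARCHIVE__
EOF
cat payload.tar.gz >> installer.sh
chmod +x installer.sh

# Run it somewhere else to prove it carries everything with it.
mkdir -p unpack && cd unpack
sh ../installer.sh      # extracts src/ here, then runs make
```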


Searching for a text revisioning tool

Sun, Mar 11, 2001 04:48:16PM +0100
Peter Paluch (peterp from frcatel.fri.utc.sk)

Hi,

I often do revisions and checks of articles and text documents that my colleagues write, and under Linux I miss the feature of MS Word 97 and above that allowed me to do revisions very conveniently. By "revisioning" I mean writing marks and suggestions for the author into the reviewed document, striking out whole words or sentences and replacing them with new ones.

I'm thus searching for a Linux document revisioning tool. It would be lovely if the tool worked with XML. Do you know anything that could help me? (Please notice that CVS is not what I need.)

Thanks a lot in advance.

Have you taken a look at WordPerfect 8 for Linux? I don't have it installed on my current machine, but I seem to remember seeing some kind of revision-type stuff in the menus. -- Ben


2.4.2 and loop devices

Tue, 13 Mar 2001 22:42:11 -0800
David Ellement (david.ellement from home.com)

I've recently compiled the 2.4.2 kernel (under RH 7.0). It seems I can no longer run any commands that interact with the block loop devices: mkbootdisk, mkinitrd, mke2fs /dev/loop*, mount -o, ... If I run one of them, it hangs at mke2fs /dev/loop; if I try to halt the system afterward, it hangs trying to shut down the filesystems.

I've tried compiling with loop device support as a built-in, and as a module (and lsmod shows it loaded). What am I missing?

... but he managed to discover for himself ...

The 2.4.2 kernel has a bug which causes a deadlock on loop devices. It is fixed in the 2.4.3-pre2 and later patches.

Thanks for passing us the Tip, David! -- Heather


Re your Fortran answer (tag 15, iss 64)

Tue, 13 Mar 2001 17:04:00 +0000 (GMT)
duncan (D.C.Martin.2000 from Cranfield.ac.uk)

I read with interest about how g77 works, and I plan on using it when I get a chance. The questioner would probably find it useful to check out www.fortran.com - it has links to many different Fortran products, services and benchmark tests, and a lot of what is there is relevant to, or directly aimed at, Linux users. Many compilers seem to be aimed squarely at the Linux market. I guess that is because of the popularity of Beowulf-type clusters, but it's nice to know that even where almost everything is (visibly) written in C, there is still room for Fortran.
Hope this helps
Cheers
Duncan
ps TAG is great. Keep it up.


Agenda Computing Challenges Palm

Thu, 15 Mar 2001 16:34:11 -0800 (PST)
Heather (The Editor Gal)

Is this press release true? Can somebody summarize how far the Linux-on-PDAs projects have gotten?

Handhelds.org has a great deal of information about putting Linux onto PDAs. Transvirtual's PocketLinux (their penguin is very cute: his whole tummy is a pocket protector) runs on the iPaq, VTech's Helio, and maybe others by now. PocketLinux has to be installed from a dev environment on another box, but this is no different from the first fellow who force-fed Linux onto his laptop across its PLIP cable or Ethernet crossover. The result is operational without an external bootstrap, but varies in usability.

Certainly some complete OS bigots have tried to put Linux on their Palms.

Agenda may be the first to actually sell a PDA preloaded with Linux, and not designed for some other OS first, though.

And, their Linux environment has the usual PDA features, rather than trying to be X or a terminal. -- Heather

----- Forwarded message from Agenda Computing -----

Subject: Agenda Computing Challenges Palm
Date: Fri, 9 Mar 2001 19:49:05 -0800 (PST)

The complete text of their Press Release can be found at http://www.agendacomputing.com/about/press20010309.html


Mailbag #62; Memory mystery

Tue, 20 Mar 2001 12:35:52 +0100
Frode Lillerud (frode.lillerud from c2i.net)

I know that Abit had a similar problem with their BH6 motherboard, Linux wouldn't show RAM over 64MB. They solved it by releasing a BIOS patch.

Yours sincerely
Frode Lillerud, Norway


mcad

Mon, 26 Mar 2001 09:29:15 -0800 (PST)
Heather (The Editor Gal)

Hello. I keep seeing the term "mechanical CAD", but am not sure of its actual meaning. What is mechanical CAD, and what differentiates it from CAD? Thanks. RES

This isn't really a question about Linux, but I'll toss in a potshot.

There are absolutely piles of CAD software available for Linux. Most of it appears to be for circuit-board design. That's not terribly useful for developing instructions to send to a metal lathe so a part can be cut. And both of these are very different from architectural CAD for designing building layouts.

I would guess that by saying "mechanical CAD" one makes clear that the second kind is meant.

"Linux Gazette...making Linux just a little more fun!"


Opera - a lightweight browser for Linux

By Matthias Arndt


Table Of Contents

Introduction

Currently, the Linux community lacks a stable and fast web browser.

Of course, there's Netscape, but it's neither fast nor 100% stable. Netscape crashes sometimes, especially when you're downloading large files over a slow Internet link. (I will refer to Mozilla as Netscape here, because it is still a "Netscape"-like browser.)

There are several alternatives out there, but they all lack features that most (multimedia) webpages require, such as Java, JavaScript, frames, tables, CSS and even Flash. My personal opinion is that most of this is just trash, not really needed for a decent website - except tables.

Ported from the Windows world, Opera seems to fill the gap. It's still not a full replacement for Netscape on the Linux platform, but it's very close to reaching this goal.

This article focuses on the advantages and disadvantages of Opera, its concepts, and finally a comparison to Netscape.

Screen Layout & Look'n'Feel

A picture is always a good starting point. Click on the link below to see a screenshot of Opera.

Opera - as seen when started, browsing Slashdot  [237 KB]

If you're used to the Windows version of Opera, you'll see that the screen layout is the same.

The first thing you'll notice is a somewhat large banner containing advertisements. Opera is a commercial product, so the try-before-you-buy version has a banner there. The banner can be configured to show specific advertisements, but I do not recommend that, because you'll lose some of your limited anonymity on the net.

The navigation buttons are familiar, and most of them work the same way as in Netscape. However, you'll notice the lack of a STOP button to cancel a transfer.

Opera uses a kind of multiple-document interface. Unlike Netscape, all document windows are opened inside the main frame as subframes. You can choose between a full view of a single document or several subframes open at the same time.

Just take a look at the two examples below....

Opera, two documents open at the same time, one is shown fullscreen   [210 KB]

Opera, two documents open at the same time, both shown in separate subframes   [196 KB]

You'll notice that there are separate buttons for each open document. This allows you to operate Opera even if there's no window manager running. In addition, frame switching is much faster than in Netscape.

A very nice feature, borrowed from MS IE, is the ability to switch to a real fullscreen mode. Press F11 and the current browser frame will be shown fullscreen, really filling the whole screen, not just the subframe of the Opera window.

The disadvantages of this mode are that you can neither switch to other subframes nor use the forward and backward buttons.

You can customize Opera's look'n'feel to a great extent, much more than Netscape's. The display of documents is controlled by CSS: you can either use a style sheet from a web site or supply your own. You can select a whole file for this, or customize the color, font and size of the various objects such as headers, paragraphs, etc.

Take a look at the customization dialog below:

customization dialog for applying personal CSS   [49 KB]

You can't customize all of that in Netscape. A pretty cool feature.

Bookmark Handling

Bookmark handling is very good in Opera. You can import your Netscape bookmarks and your KDE shortcuts (if you have some). A feature carried over from the Windows version is the ability to import MS IE bookmarks as well - but I guess no real Linux user has a need for that, IMHO. One drawback is that the import is read-only. That's somewhat limiting, but acceptable.

You can switch between a view with bookmarks and one without. The default layout is close to MS IE's.

Opera, with imported Netscape bookmarks open   [246 KB]

Note that import of foreign bookmarks is done automatically.

Quality of browsing - what can Opera render?

Well, Opera renders almost any decent HTML code. Tables, frames, CSS: all are no problem. In this respect it has the full quality of Netscape, something not found in many alternative browsers on the Linux platform.

In particular, the CSS support is even better than Netscape's. Just compare a site that relies completely on CSS and is optimized for MS IE: view the page with MS IE, with Netscape and finally with Opera.
As far as I have noticed so far, Opera's output is closer to MS IE's than Netscape's is. But that may be subjective.

Opera has three features that it does not handle properly:

  • JavaScript support is incomplete
  • Java is not supported (yet)
  • Flash is not supported

But IMHO no one really needs JavaScript; it is really annoying anyway. Flash is for multimedia freaks, and Java... well, if you need it, you can still use Netscape.

Opera is able to display PNG pictures, a feature not supported by most alternative browsers.

Opera in comparison to Netscape

Take a look at the following table, then decide for yourself.

I haven't used Konqueror from KDE2 yet, so I cannot give you a comparative overview of it and Opera.

                             Opera                               Netscape
Cost                         free, but advertisements are       free for non-commercial use
                             shown
Size                         average (statically linked),       big (statically linked only)
                             small (dynamically linked)
Speed                        startup is fast; document          startup is very slow, even with
                             loading and rendering are fast     much RAM; document loading and
                                                                rendering are average
Rendering quality of text    average (at least with my          average to bad (depends on the
(compared to the Windows     font settings)                     font sizes in CSS and the
versions)                                                       fonts used)
Table support                yes                                yes
Frames support               yes                                yes
JavaScript support           yes, but incomplete                yes
Java support                 no, but seems planned              yes
CSS support                  yes                                yes, but incomplete
Stability                    rather good; sometimes crashes     average to good
                             without a reason (at least on
                             my system)

Customization

Opera is very customizable. You can select your own CSS style sheets to use and define shortcuts for search engines.

You can choose the identity string as well. Using this feature, you can claim that you're using MS IE or Netscape instead of Opera.

This might be useful on some sites that require the use of one of the big browsers out there.

Last but not least, the screen layout (the positions of the navigation and status bars) can be customized. You can even choose to show an advanced navigation bar instead of the small default one.

Technical Notes & Downloading

Opera uses the Qt 2.2 library. However, it runs nicely without KDE.

Opera is available at www.opera.com.

You can choose between tar.gz, deb and rpm packages. These come either statically linked or dynamically linked.

A version for PowerPC Linux is available as well.

I suggest using the statically linked version: although the packages are bigger, it is more likely that Opera will run out of the box.

Installation

Installation of Opera is easy.

The tar.gz archives come with an install script.

Just unpack the Opera archive in a temporary place and run install.sh in that directory.

I have no experience with the deb or rpm packages of Opera. Given the nature of these formats, I suppose both just install Opera and you can use it out of the box afterwards.

Conclusion

Opera is a fast and lightweight web browser. It has very good features and is able to render about 90% of the webpages out there.

There are still some features missing or incomplete. At least today, Opera is not ready to be used standalone if you want JavaScript, Java and multimedia stuff like Flash. But if you can live without these, you'll find that Opera can be a 100% replacement for Netscape.

Give it a try. Opera has many nice features not seen before in the Linux environment.


Copyright © 2001, Matthias Arndt.
Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 65 of Linux Gazette, April 2001

"Linux Gazette...making Linux just a little more fun!"


Your Own Home Domain With ADSL

By Ray Chan


Note: Domain names and IP numbers in this article have been changed. I have no connection with myfakedomain.com and myhome.net--please do not send questions or complaints to them.

Acknowledgement

This article is a walk-through of the steps I took to host my own domain name at home. It is not a guide or tutorial on how to set up and host your domain; there are already lots of HOWTOs and tutorials on that topic. However, this article provides a working example for your reference, and I've also included URLs to some really useful web sites.

Background

In late 2000, when everyone was talking about or already using broadband, I was still using my Hayes 28.8 kbps modem to surf the net. My reason was simple: none of the broadband providers offered a fixed IP address, although they did provide unlimited-usage plans. I have a few domain names registered and hosted at some ISPs, and the service of the web-hosting companies is limited to HTML, Perl CGI, a POP server and maybe mod_rewrite. They never provide SMTP, MySQL, PHP4 or whatever else is useful, or only at a really high price. That's why I was looking for a broadband provider willing to provide a fixed IP, so that I could host my own web site and run whatever I want.

Thank god, in January 2001 one of the broadband providers in my area announced that they would provide a fixed IP at extra cost. It is really expensive, but hey, that's what I need, and I'm willing to pay for any service that fits my needs. On the other hand, I can save a lot of money at the web-hosting companies where my domain names are currently located. Why not a dynamic IP? Yes, a dynamic IP may do the same job, using some tricks with dynamic DNS as provided by no-ip, DynDNS, etc., but it is too annoying and not really good if you are going to host your own email server.

Planning the Network

OK, I finally subscribed to the broadband service. It took two weeks for them to arrange a technician to install the splitter and ADSL modem. Actually, I could do it myself, but they don't want me to. Anyway, this was a good time to build the network and prepare for the high-speed connection. Before actually building the network, it is better to think about the topology first. I made use of my spare old hardware and spent some money to build two Linux boxes. One box would be the bastion host, running the Apache web server, an FTP server, an email server and the MySQL database server; it would also act as an exterior router, routing traffic between the Internet and the intranet. The other Linux box would be the intranet server, hosting internal applications and data, and acting as an interior router. Someone asked: why two Linux boxes? Well, for security reasons, of course. Please refer to your technical books on firewalling for a detailed explanation. Figure 1 shows the diagram of my home network.

Since I got only one fixed IP, I'm not going to run any high-traffic web site, and a single bastion host can do the job well, since this is a basic and simple network. It is the solution for me, not necessarily for everyone reading this article. Again, think about your own plan.

Building the network

I downloaded and installed Red Hat 7.0 on both of the Linux boxes. Choose whatever packages sound interesting to you; it is fine to use another distribution. However, some essential components are required in order to set up an Internet server. Please refer to the HOWTOs at linuxdocs.org; again, this is not a tutorial. I strongly suggest the following HOWTOs for this section:

  • ISP-Setup-RedHat
  • DSL HOWTO for Linux

And the following mini-HOWTOs:

  • Setting Up Your New Domain Mini-HOWTO
  • Home-Network-mini-HOWTO
  • IP-Subnetworking

If you know nothing about what Linux can do, you must read 'The Linux Networking Overview HOWTO'.

Securing the bastion host with a packet-filtering firewall using ipchains

OK, now I had Red Hat installed, but the Linux boxes were not protected yet. I needed to set up a firewall and routing tables in order to protect the Linux machines and forward packets from the internal network to the external network. This is a really big job for a home user, me included. I did a lot of searching at freshmeat.net, Google and SourceForge, and tried a lot of free firewalling scripts; none of them provides good security, and they are hard to modify. Yes, I'm too lazy to write my own filtering and routing rules. You are lucky: I found some really good firewall scripts at ICEBERG. Their scripts are easy to modify and set up all the routing. I ran their scripts on both of my Linux machines, and then I was free to do other tasks. Thanks again, ICEBERG. The following is a list of useful documentation on firewalling and packet forwarding:

  • Firewall-HOWTO
  • IP-Masquerade-HOWTO
  • IPCHAINS-HOWTO

If you wanna use Napster behind the firewall, you should read the IPMasquerading+Napster mini-HOWTO.

Setting up an external DNS server on the bastion host

Although I'll use HAMMER NODE to host the DNS entries for my domain names, a working caching-only nameserver is still required on the Linux box. The configuration files are listed below:

    /etc/named.boot
    /etc/named.conf
    /var/named/named.ca
    /var/named/named.local
    /var/named/named.myfakedomain.com
    /var/named/named.myhome.net
    /var/named/named.rev.3
    /var/named/named.rev.2
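    The article lists these files without showing their contents. As a rough illustration only, the caching-only part of /etc/named.conf for the BIND shipped with Red Hat 7.0 might look like the fragment below; the zone file names are the ones listed above, and the article's domain zones would be additional master zone blocks.

```
options {
        directory "/var/named";
};
// Root hints, so the server can resolve outside names on its own
zone "." {
        type hint;
        file "named.ca";
};
// Reverse lookups for the loopback address
zone "0.0.127.in-addr.arpa" {
        type master;
        file "named.local";
};
```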

    Connecting to the ADSL modem

    Connecting to the ADSL modem under Linux is easy: just download the RP-PPPoE RPM from Roaring Penguin Software Inc., install it, and run adsl-setup. That's all; it is as easy as on a Windows machine.

    Migrating the domain name to the bastion host

    At this point, the web server did not seem to be working yet. I fixed it by adding the lines below to the /etc/httpd/conf/httpd.conf file:

    ServerName www.myfakedomain.com (for bastion host)
    ServerName www.myhome.net (for Intranet Server)

    The web servers on both Linux boxes were up and running after a reboot. Now what's next? I started my favourite browser, Netscape, and searched my favourite search engine, Google, for a free DNS service. Finally I reached HAMMER NODE (hn.org). I was lucky to find them: they provide free services for both dynamic-I.P. and static-I.P. users, have a good, easy-to-use UI, and manage to provide a reliable and stable service. I created a virtual domain mapping account with a configuration like this:

    Rec FQDN               Rec Type  Rec Value         DynDNS  MX Pref  Commands
    myfakedomain.com       NS        ns1.hn.org        0       0
    myfakedomain.com       NS        aux1.hn.org       0       0
    www.myfakedomain.com   CNAME     myfakedomain.com  0       0
    myfakedomain.com       A         202.xxx.xxx.xxx   0       0
    mail.myfakedomain.com  MX        202.xxx.xxx.xxx   0       0
    ns.myfakedomain.com    NS        myfakedomain.com  0       0
    mail.myfakedomain.com  CNAME     myfakedomain.com  0       0
    ns.myfakedomain.com    CNAME     myfakedomain.com  0       0

    After setting up the DNS account at hn.org, I changed the DNS entries, both the primary and the secondary server, at the domain registration company (usually register.com or whatever) to the DNS servers provided by hn.org. It may take some time for the DNS entries to refresh.
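    While waiting, you can query hn.org's nameserver directly to confirm the new records are in place; with the standard resolver tools, a quick check might look like this (host names as used in this article):

```shell
nslookup www.myfakedomain.com ns1.hn.org   # ask hn.org's nameserver directly
dig @ns1.hn.org myfakedomain.com any       # or, with dig, list all the records
```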

    Wonderful! Once the DNS entries were refreshed, all requests to www.myfakedomain.com were forwarded to my bastion host. That's simple, huh? Thanks for the great work of hn.org. For details about how to set up DNS entries, please refer to the DNS-HOWTO.

    Because the machine connected to the ADSL modem provides services to the public, it can be accessed from anywhere by anyone who has Internet access. For security reasons, I needed to restrict access to the various tcpd services on this machine. I edited the files /etc/hosts.allow and /etc/hosts.deny accordingly:

    /etc/hosts.allow

    ALL: 127.0.0.1
    in.telnetd: 192.168.2.2
    in.ftpd: 192.168.2.2
    sshd: 192.168.2.2 203.xxx.xxx.xxx

    /etc/hosts.deny

    ALL: ALL : spawn (echo Attempt from %h %a to %d at `date` | tee -a /xxx/xxx/tcp.deny.log | mail my@email.com )

    As shown in the above configuration files, all machines on the internal network can telnet, ftp, ssh and sftp to the bastion host. The address 203.xxx.xxx.xxx is the I.P. address of my office machine, which is allowed to log in to the bastion host remotely using ssh and to transfer files to it using sftp. Telnet and ftp to the bastion host are never allowed from machines outside the internal network, because the user name and password are transmitted in plaintext and may easily be captured by an attacker. HTTPD is not included in the above configuration files because HTTPD is not under the control of INETD.

    Connecting to the bastion host safely using SSH

    Telnet and FTP connections to the bastion host are allowed only from the internal network; SSH and SFTP must be used to connect from the external network. Refer to the article 'Using ssh' in Linux Gazette for how to set up and use SSH. You must have SSHD installed and running in order to support SSH. SFTP can be downloaded from http://enigma.xbill.org/sftp/; it is easy to install and use, so please refer to the README on the web site.
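    For example, from the office machine allowed in hosts.allow above, a session might look like this (the user name is a placeholder; the host name is the one used throughout this article):

```shell
ssh myuser@www.myfakedomain.com    # encrypted interactive login to the bastion host
sftp myuser@www.myfakedomain.com   # encrypted file transfer, replacing plaintext ftp
```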

    Setting up the Intranet server

    In order to protect the internal network, I disabled all access from the external network to my internal network:

    /etc/hosts.allow

    ALL: LOCAL 192.168.1.2 192.168.1.7

    /etc/hosts.deny

    ALL: ALL : spawn (echo Attempt from %h %a to %d at `date` | tee -a /xxx/xxx/tcp.deny.log | mail my@email.com )

    An email is sent to my mailbox whenever anything attempts to connect to a prohibited service on either of my Linux servers.

    As shown in figure 1, all internal machines have a host name. You can use whatever host name and domain name you like for your internal network, even if the domain name is already registered at the NIC; however, special care must be taken when setting up your own internal DNS server.

    Setting up intranet DNS server - named

    Again, please refer to the HOWTOs or technical books for how to set up a DNS server. The following are the configuration files of the DNS server running on the Intranet server:

    /etc/named.boot
    /etc/named.conf
    /var/named/named.ca
    /var/named/named.local
    /var/named/named.myhome.net
    /var/named/named.rev.1
    /var/named/named.rev.2

    More security issues

    Hackers are all around you; firewalling with packet filtering and controlling service access via hosts.allow/hosts.deny are never enough on their own. New security holes may be discovered every day. You should subscribe to the corresponding mailing lists and upgrade your Linux system constantly. A few more articles and pieces of software about security are worth introducing:

  • Security for the Home Network LG #46
  • Linux Firewall and Security Site
  • Mason - the automated firewall builder for Linux
  • Astaro AG (Great firewall linux distribution with web interface)
  • The Ethereal Network Analyzer
  • Nessus - The Security Scanner
  • Stunnel - Universal SSL Wrapper

    What about POP3 and SMTP servers?

    POP3, like TELNET and FTP, transfers the username and password in plaintext and is considered insecure. SPOP may be set up to encrypt POP traffic. However, I don't want to store my personal email on any machine outside the internal network, including my office workstation, so I'm not going to set up POP3 on the bastion host. The reason for not allowing SMTP is that relaying mail is dangerous: spammers will make use of a relaying SMTP server to send their hateful spam. On the other hand, setting up a non-relaying SMTP server for yourself is pointless, because you cannot use it to send mail from outside the network. Instead, I can simply log in to my bastion host using ssh and run pine to check and reply to my messages in a secure way.
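    Reading mail this way is a one-liner (the user name is a placeholder; -t allocates a terminal so that pine's full-screen interface works over the encrypted channel):

```shell
ssh -t myuser@www.myfakedomain.com pine   # run pine on the bastion host over ssh
```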

    Subdomain for web server

    Wow, everything is working now. I can host my web server, email server and ftp server on my home Linux box. It rocks! Now I need a subdomain, resume.myfakedomain.com, to host my online resume. Adding the following lines to /etc/httpd/conf/httpd.conf handles all the magic:

    RewriteEngine on
    ## Ignore www.myfakedomain.com
    RewriteCond %{HTTP_HOST} !^www\.myfakedomain\.com [NC]
    ## A directory with the name of the subdomain must exist
    RewriteCond %{DOCUMENT_ROOT}/%1 -d
    ## Add the requested hostname to the URI
    ## [C] means that the next Rewrite Rules uses this
    RewriteRule ^(.+) %{HTTP_HOST}/$1 [C]
    ## Translate abc.myfakedomain.com/foo to myfakedomain.com/abc/foo
    RewriteRule ^([a-z-]+)\.myfakedomain\.com/?(.*)$ http://www.myfakedomain.com/$1/$2 [L]

    Other useful configuration files

    /etc/hosts (bastion host)

    127.0.0.1	localhost.localdomain 	localhost
    192.168.2.1	router.myhome.net	router
    192.168.2.2	gateway.myhome.net	gateway
    202.xxx.xxx.xxx	www.myfakedomain.com	www
    

    /etc/hosts (intranet gateway)

    127.0.0.1	localhost.localdomain 	localhost
    192.168.1.1	server.myhome.net	server
    192.168.1.2	devel.myhome.net 	devel
    192.168.1.3	php.myhome.net	php
    192.168.1.4	asp.myhome.net	asp
    192.168.1.7	be.myhome.net	be
    192.168.2.1	router.myhome.net	router
    192.168.2.2	gateway.myhome.net	gateway
    

    /etc/resolv.conf (bastion host)

    search myfakedomain.com
    nameserver	127.0.0.1
    

    /etc/resolv.conf (intranet gateway)

    search	myhome.net
    nameserver	127.0.0.1
    

    Network Card Setting

    Ethernet port setting:

    More network configuration files:

    /etc/sysconfig/network (bastion host)
    /etc/sysconfig/network-scripts/ifcfg-eth0 (bastion host)
    /etc/sysconfig/network-scripts/ifcfg-eth1 (bastion host)

    /etc/sysconfig/network (Intranet gateway)
    /etc/sysconfig/network-scripts/ifcfg-eth0 (Intranet gateway)
    /etc/sysconfig/network-scripts/ifcfg-eth1 (Intranet gateway)

    /etc/rc.d/rc.local (both the bastion host and the Intranet gateway)
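    The article lists these files without reproducing them. As an illustration, a Red Hat 7.0 ifcfg file for the bastion host's internal interface, using the addresses from the TCP/IP setting summary, might look like this:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 (bastion host) - sketch only;
# values taken from the TCP/IP setting summary in this article
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.1
NETMASK=255.255.255.0
NETWORK=192.168.2.0
BROADCAST=192.168.2.255
ONBOOT=yes
```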

    TCP/IP setting summary

    Bastion host
    Default Gateway: ppp0
    Nameserver: 127.0.0.1
     
    Network interface: eth0
    I.P. Address: 192.168.3.1
    Subnet mask: 255.255.255.0
     
    Network interface: eth1
    I.P. Address: 192.168.2.1
    Subnet mask: 255.255.255.0

    Intranet Server
    Default Gateway: 192.168.2.1
    Nameserver: 127.0.0.1
     
    Network interface: eth0
    I.P. Address: 192.168.1.1
    Subnet mask: 255.255.255.0
     
    Network interface: eth1
    I.P. Address: 192.168.2.2
    Subnet mask: 255.255.255.0

    Workstations on the internal network
    Default Gateway: 192.168.1.1
    Nameserver: 192.168.1.1
     
    Network interface: eth0
    I.P. Address: 192.168.1.X
    Subnet mask: 255.255.255.0

    Further setup and reading

    What if you want to access an internal machine running Windows from another network while maintaining security through the firewall? The answer is to use Virtual Private Network (VPN) technology; recent versions of Linux do support VPNs. More details can be found in the VPN HOWTO. If you have more than one domain and want to host them on the same bastion host, you may need special settings for your Apache web server and sendmail server. The next version of this article will include a walkthrough of the VPN and virtual domain setup.

    If you have any suggestions or comments regarding this document, please feel free to contact me at rayxtra@hotmail.com.


    Copyright © 2001, Ray Chan.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    HelpDex

    By Shane Collinge


    The first cartoon is a reference to the Cobalt Qube, a network server appliance that runs on Linux. [Cartoons: forJon.jpg, accessgone.jpg, name.jpg, 0x88.jpg]

    Shane's cartoon archive is available on his web site, http://www.ShaneCollinge.com/.


    Copyright © 2001, Shane Collinge.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Interview with Linux Today's Marty Pitts

    By Fernando Ribeiro Correa and Marcos Martins Manhães
    Originally published at OLinux


    Enjoy this interview with Marty Pitts, Managing Editor at Linux Today. He talks about Linux Today's evolution and the growth of its main subject, Linux operating systems.

    OLinux: Please introduce yourself. (career, education, hobbies, personal and professional achievements).

    Marty Pitts: My name is Marty Pitts. I worked in the nuclear industry for 13 years before joining Linux Today, in jobs ranging from Purchasing Agent to Network Admin. I like to ski in the winter, hike and camp in the summer and read SciFi in between. I also like to play around with the latest Linux distros.

    OLinux: How long have you been working and what are your responsibilities at Linux Today?

    Marty Pitts: When I became interested in using Linux at work, I started looking for information about Linux online. One of the resources I came across was Linux Today. I liked that it was updated hourly. When I found news that they did not have, I started using their contrib form. After several months, the site owners, Dave and Dwight, asked if I would be interested in working as a volunteer on the site. Having become a Linux news junkie, I jumped at the chance.

    In the summer of 1999, Dave sent me an email asking what my employment situation was. It just so happened that at my job at the time, my boss of several years had just turned in his notice to quit. It was a good opportunity to think about a career change. How many people actually get a chance to work at what they love?

    I started working for Dave and Dwight full time in September of 1999 as the Managing Editor. About a month later, Dave and Dwight sold the Linux Today web properties, which included LinuxPR.com, to internet.com. I have been a full-time employee of internet.com ever since.

    OLinux: How's the site organized? Give us an idea of how Linux Today works. How many people are involved?

    Marty Pitts: For the whole channel, which includes 14 web sites, there are approximately 9 full time editors and programmers.

    Right now there are two full-time people who work on Linux Today, myself and Michael Hall. We also take care of LinuxPR and a couple of other sites in the Linux/Open Source channel.

    Michael lives on the east coast of the US, and I live in Washington state on the west coast. So naturally we break up the day, with Michael covering the first part of the day and then I come online later, with a couple hours of overlap.

    OLinux: Can you describe Linux Today's evolution since it began?

    Marty Pitts: Dave and Dwight were the ones that came up with the idea for Linux Today, and they are the ones who successfully executed that idea. They were successful enough to attract the attention of internet.com.

    It started as a labor of love for Dave and Dwight. They wanted to provide a resource that people could use to find out what was going on in the Linux/Open Source world. They started the site on September 30, 1998. A year later, they had both quit their daytime jobs to work full time on the site, they had been able to hire a full-time editor, and they had posted over 10,000 stories. Currently we are right at 34,000 stories posted, just on Linux Today.

    After the sale of the site to internet.com, some things changed and others remained surprisingly the same. Dave chose to leave and pursue other goals; Dwight stayed on, and we worked to keep the site going. To replace Dave, who had done most of the site programming, Paul Ferris was hired.

    Paul, a great guy, started working on the programming side but still found time to write his column, Rant Mode Equals One. Currently we are using the second iteration of the site code, which Paul wrote, and we are about to roll out the third iteration. It will provide increased flexibility so that the code can be used across a variety of different sites, each with its own unique requirements.

    What stayed the same, during the transition, was the direction and focus of Linux Today. We were told to keep doing what we had been doing that had made Linux Today a popular site, which was a relief.

    Today we have a lot more original content than we used to. In addition, our focus is on making the whole Linux/Open Source channel work together well.

    OLinux: Are there companies sponsoring or maintaining Linux Today?

    Marty Pitts: Since Linux Today is owned by internet.com, they are the ones who pay for the maintenance of this and the other sites.

    OLinux: Is there any central control to avoid redundancy and improve editorial efforts?

    Marty Pitts: Yes. We have, as part of the backend to Linux Today, an Editorial Board that keeps track of who is working on what stories. In addition, we use email extensively, plus we have an IRC channel for quick communication.

    In spite of that, we occasionally will have a duplicate story go up, which is why you will sometimes see a message that says, 'This story has been unposted.'

    OLinux: How difficult is it to present good content day after day? Besides the users' contributions, do you have any other content sources (agencies like Reuters, etc.)?

    Marty Pitts: Early in the week, Monday and Tuesday, it is usually very easy to find content to post. As the week progresses, though, it can be a struggle to find good content and resist posting something that is just a rehash of a story that has already been covered. Weekends are more difficult, since there is usually no news from the traditional sources. Since we like to have time off as well, we break up the weekend between the editors, and we also future-post some items so that they show up over a regular period of time. This way, we are able to take a break and our readers can still find some fresh content.

    Our readers are a very valuable source of content. Without them providing links and suggestions, Linux Today would not be where it is today.

    We are able to find some relevant content elsewhere within the internet.com properties, which we use when available.

    OLinux: How do you see Linux Today in the Open Source world? What's the best contribution Linux Today has given to the Linux community during its existence?

    Marty Pitts: We see Linux Today as the place to stop if you want to know what is going on in the Linux/Open Source community today. We search out the events and news and bring it to one place so that our readers don't have to spend the time doing that search for themselves.

    Through the forums and story talkbacks, we help to facilitate discussions within the community and give our readers a place to react to the news of the day.

    I believe that Linux Today's greatest contribution is that we are able to raise the awareness of our readers about the events, good and bad, that are happening within and to the Linux/Open Source community.

    OLinux: What are the new features being developed for Linux Today? Can you detail the main current projects?

    Marty Pitts: We are about to roll out redesigned site software that will provide greater flexibility and robustness to all the sites on the channel.

    OLinux: What is your opinion about the growth of Linux in the enterprises? What about desktops, do you have a projection for the future?

    Marty Pitts: From my experience, Linux is infused into the enterprise deeper than anyone suspects. When a problem can be solved without having to ask for a new budget item, the guys/gals on the front lines will use what works. I see the projections by companies like Gartner and IDC and I have to laugh. They don't know how to properly measure the revolution that is taking place under their noses. Their methodology can't account for stealth deployments.

    The desktop is there already. Ease of use and graphical tools have come a long way in just the past year or two. I use Linux as my work environment, and for many like myself, Linux is already there. Just look at what we have available to us: DVD decoding and playback capability, and support for the latest video, sound and networking hardware. The environments available are amazing as well. Even though I don't use KDE or GNOME (I use a pure 'Enlightenment' desktop), I have both of them on my system and use their apps.


    Copyright © 2001, Fernando Ribeiro Correa.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Internet Printing - Another Way

    By Graham Jenkins


    The Problem

    You are doing some work on your home PC, connected to your favorite ISP - and you decide you want to print a Word document on the high-speed color printer at your office. That printer is connected to the corporate LAN, but you can't talk to it using LPR or IPP because it is hidden behind the corporate firewall.

    You could perform a print-to-file operation, then email the file to somebody at your office, and get them to send it to the printer. But there are a few steps here - and it gets more complicated if there is a restriction on the length of email messages which can be passed through one of the servers along the way. You will then have to perform some sort of file-split operation and send the individual parts.

    Client Software

    The people who make Brother printers thought of all this, and developed a set of Windows printer drivers. These enable users to print directly to a designated email address. The print-job is automatically split into parts if necessary, and each part is base64-encoded prior to transmission. Users can also nominate an address for email confirmation.

    These Windows printer drivers (for Windows 95/98, and for Windows NT-4.0/2000) can be downloaded from the Brother website.

    Printer Capabilities

    What the Brother people expect users to do their printing on is, of course, a Brother printer - specifically, in this instance, one equipped with a network card able to accept, decode and re-assemble mail messages directed to it.

    But what if you wish to print on a printer from another manufacturer?

    Doing it in Software

    My first stab at this was a Korn-shell program to which appropriate incoming mail items were piped via a sendmail alias. The program used 'awk' to extract information such as job and part number, then decoded each such item into an appropriately named file in a designated directory.

    After receiving a part, the program marked it as "complete", then set an anti-simultaneity lock and went through a procedure to determine if all necessary parts had been received in full. If they had, it concatenated them in sequence, piped the result to the nominated printer, and deleted them.

    It was then that I started thinking: "What if there isn't enough room to store all the parts for all the jobs which may currently be arriving?" And: "How do the Brother people do it on a network card?"

    Doing it Without Local Storage

    The answer to my second question is: "They use a POP3 server!". The components of each job stay on that server until the network card determines that all necessary parts are available, at which stage it sucks them down and decodes them in sequence, sending the output to the printer mechanism, and requests their deletion from the server.

    So here's how it can be done on a Linux machine. The program has been written in Perl so that the NET::POP3 module can be used for easy access to a POP3 server. It has been tested on both NetBSD and Solaris machines, so it should work almost anywhere; all you'll have to change are the location of the Perl interpreter, the name used for 'awk', and the format of the 'lpr' command. [Text version of this listing.]

    #!/usr/bin/perl -w
    # @(#) BIPprint.pl      Acquires Brother-Internet-Print files from POP3 server
    #                       and passes them to designated printer(s). Small-memory
    #                       version.  Intended for invocation via inittab entry.
    #                       Graham Jenkins, IBM GSA, Feb. 2001. Rev'd: 17 Mar. 2001.
    
    use strict;
    use File::Basename;
    use Net::POP3;
    use Date::Manip;
    use IO::File;
    my $host="bronzeback.in.telstra.com.au";        # Same host and password for
    my $pass="MySecret";                            # each printer.
    my $limit=30*1024*1024;                         # Maximum bytes per print job.
    my ($printer,$awkprog);
    defined($ARGV[0]) || die "Usage: ", basename($0). " printer1 [ printer2 ..]\n";
    open(LOG,"|/usr/bin/logger -p local7.info -t ".basename($0)); autoflush LOG 1;
    $awkprog="";                            
    while (<DATA>) {$awkprog = $awkprog . $_};      # Build awk program for later,
    while (1) {                                     # then loop forever, processing 
      sleep 30;                                     # all printers in each pass, and
      foreach $printer (@ARGV) {process($printer);} # sleeping for 30 seconds
    }                                               # between each pass.
    
    sub process {
      my ($flag,$i,$j,$k,$l,$m,$allparts,$user,$pop,@field,@part,$count,$top15,
          $msgdate,$parsdate,$notify,$reply,%slot,$fh);
      $user = $_[0];
      $pop = Net::POP3->new($host);                 # Login to POP3 server and get
      $count = $pop->login($user,$pass) ;           # header plus 1st 15 lines
      $count = -1 if ! defined ($count) ;           # of each message. Use apop
      for ($i = 1; $i <= $count; $i++ ) {           # instead of login if server
        $top15=$pop->top($i,15) ;                   # supports it.
        if ($top15) {                       
          $msgdate = ""; $notify="None"; $reply="";
          for ($j = 0; $j < 99; $j++ ) {
            if (@$top15[$j]) {                      # Use arrival-date on POP3
              if($msgdate eq "") {                  # server to ascertain age of
                (@field) = split(/;/,@$top15[$j]);  # message; if it is stale,
                if ( defined($field[1])) {          # delete it and loop for next.
                  $parsdate=&ParseDate($field[1]);  # (Search for semi-colon
                  if( $parsdate ) {                 # followed by valid date.)
                    $msgdate="Y";
                    if(&Date_Cmp($parsdate, &DateCalc("today","-3 days") ) lt 0 ) {
                      print LOG "Stale msg: $user $parsdate\n";
                      $pop->delete($i);
                      goto I;                       # If POP3 server does
                    }                               # automatic message expiration
                  }                                 # this entire section can be
                }                                   # omitted.
              }
              (@field) = split(/=/, @$top15[$j]);
              if ( defined($field[0]) ) {   
                if ($field[0] eq "BRO-NOTIFY") {chomp $field[1];$notify=$field[1];}
                if ($field[0] eq "BRO-REPLY")  {chomp $field[1];$reply =$field[1];}
                if ( $field[0] eq "BRO-PARTIAL" ) { # Comment above line to
                  ( @part )=split("/", $field[1]);  # prevent mail notification.
                  chomp $part[1];           
                }
                if ( $field[0] eq "BRO-UID" ) {     # Determine print-job and part
                  chomp $field[1];                  # thereof contained in message.
                  $slot{$field[1]."=".$part[0]} = $i ;
                  $allparts = "Y";                  # As we see each message, check
                  for ($k=1;$k<=$part[1];$k++) {    # whether we have all parts.
                    $allparts = "N" if ! defined($slot{$field[1]."=".$k}) ; 
                  }
                  if ( $allparts eq "Y" ) {         # Print and delete all parts.
                    print LOG "$field[1] $part[1] => $user\n";
                    if(($notify ne "None") && ($reply ne "")) {system 
                      "echo Print Job Received, $part[1] pcs|Mail -s$user $reply";}
                    $fh=new IO::File "|awk '{$awkprog}' Limit=$limit |lpr -P $user";
                    for ($k = 1;$k<=$part[1];$k++) {
                      $pop->get($slot{$field[1]."=".$k},$fh) ;
                      $pop->delete($slot{$field[1]."=".$k}) ;
                    }                               # If there is enough filespace,
                    $fh->close;                     # pipe awk output thru gzip to
                  }                                 # a temporary file, then print
                }                                   # it and delete all parts; this
              }                                     # caters for connection failure.
            }                       
          }                                         # The awk program here-under
        }                                           # is used to extract parts from
    I:}                                             # a file containing multiple
      $pop->quit() if ($count >= 0);                # parts and feed each of them
    }                                               # through a decoder to stdout.
    __DATA__
    if( Flag == 2 ) {
        Size=Size+length
        if(length == 0) { Flag=0; close("mmencode -u 2>/dev/null") }
        else if(Size<=Limit*4/3) print $0 |"mmencode -u 2>/dev/null" }
      if( Flag == 1 ) if(length == 0) Flag=2
      if( Flag == 0 ) if($1 ~ /^Content-Transfer-Enc/) if($NF == "base64") Flag=1
    

    Program Walk-Through

    The program builds a small 'awk' program for later use; then, for each printer declared on its command line, it accesses a mailbox of the same name and examines each message therein. If a message is stale, it is deleted. Otherwise the contents of some Brother-specific lines are extracted; these indicate whether email notification is required, and which part of which job is contained in the message.

    If, during examination of a message, it is determined that all the parts of its corresponding job have been seen in the mailbox, an email notification is generated if required, and the parts are extracted in sequence and piped via the 'awk' program (which decodes each part as it arrives) to an appropriate printer command. Each part is deleted as soon as it has been processed in this manner.

    Ideally, we should wait until success (or other) notification of print submission was obtained before performing the email and deletion tasks; however, as noted in the listing, this requires some local storage. In a like vein, whilst the Brother client software allows selection of email notification for several different conditions, we send notification of job submission unless "None" has been selected.

    Concluding Remarks

    This program contains a password, so it should be readable only by the user who will execute it. No special privileges are required for execution, and your entry for it in /etc/inittab should look something like:

    bi:345:respawn:su - nobody -c "/usr/local/bin/BIPprint.pl lp1 lp2 >/dev/null 2>&1"

    If you have read this far, you are probably saying: "OK, so the program doesn't need much local storage - but it sends its output to a print spooler! How bad is that?" If the size of your spool area is of concern, you can use something like 'netcat' or 'hpnpout' to send the job directly to a printer port instead of spooling it. Or you may be able to pipe your job through an FTP connection to your printer. If you do bypass the spooler in this fashion, you should use a separate instance of the program for each printer.
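    As a sketch of that spooler-bypass idea (the printer host name here is hypothetical; 9100 is the customary raw-print port on HP JetDirect network cards), the change amounts to swapping the final stage of the decode pipeline inside BIPprint.pl:

```shell
# Inside BIPprint.pl, change the tail of the decode pipeline from
#     ... | lpr -P $user
# to a direct connection to the printer's raw TCP port:
#     ... | nc printer.example.com 9100
```

    With the spooler out of the picture, nothing buffers concurrent jobs, which is why a separate instance of the program per printer is needed in this mode.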

    It's not rocket science, and there's no user-authentication or content-encryption. But it may make your life a little easier. Enjoy!


    Copyright © 2001, Graham Jenkins.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Parallel Processing on Linux with PVM and MPI

    By Rahul U. Joshi


    This article aims to provide an introduction to PVM and MPI, two widely used software systems for implementing parallel message passing programs. They enable us to use a group of heterogeneous UNIX/LINUX computers connected by a network as a single machine for solving a large problem.

    1. Introduction to Parallel Processing

    Parallel processing is a form of computing in which a number of activities are carried out concurrently, so that the effective time required to solve the problem is reduced. In earlier days, parallel processing was used for such things as large-scale simulations (e.g. molecular simulations, or simulating the explosion of an atomic bomb) and for solving large number-crunching and data-processing problems (e.g. compiling census data). However, as the cost of hardware decreases rapidly, parallel processing is being used more and more for routine tasks. Multiple-processor servers have been in existence for a long time, and parallel processing is used in your own PC too. For example, a graphics processor working along with the main processor to render graphics on your monitor is also a form of parallel processing.

    However, apart from the hardware facilities for parallel processing, some software support is also required so that we can run the programs in parallel and coordinate their execution. Such coordination is necessary because the parallel programs depend on one another. This will become clearer when we work through an example. The most widely used method of achieving such coordination is message passing, in which the programs coordinate their execution, and in general communicate with each other, by passing messages to one another. So, for example, one program may tell another, ``OK! Here is the intermediate result you need to proceed.'' If all this sounds too abstract, let's proceed with a very simple example.
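    To make the idea concrete before we reach PVM itself, here is a minimal sketch in plain POSIX C (not PVM) of one process passing an intermediate result to another. The worker/coordinator names and the pipe mechanism are just for illustration; PVM provides its own, richer message-passing functions.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A child "worker" computes a partial result and passes it to the
       parent "coordinator" through a pipe - message passing in miniature. */
    int receive_partial(void)
    {
        int fd[2];
        int result = -1;

        if (pipe(fd) == -1)
            return -1;

        pid_t pid = fork();
        if (pid == 0) {                      /* child: the worker */
            int partial = 2 + 3;             /* its intermediate result */
            write(fd[1], &partial, sizeof partial);
            _exit(EXIT_SUCCESS);
        }

        read(fd[0], &result, sizeof result); /* parent: the coordinator */
        wait(NULL);
        close(fd[0]);
        close(fd[1]);
        return result;
    }

    int main(void)
    {
        printf("intermediate result from worker: %d\n", receive_partial());
        return 0;
    }
    ```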

    2. A Very Simple Problem

    In this section, we will consider a very simple problem and see how we can use parallel processing to speed up its execution. The problem is to find the sum of a list of integers stored in an array. Let us say that there are 100 integers stored in an array, say items. Now, how do we parallelize this program? That is, we must first find a way in which this problem can be solved by a number of programs working concurrently. Often, due to data dependencies, parallelization is difficult. For example, if you want to evaluate (a + b) * c, which involves two operations, we cannot do them concurrently; the addition must be done before the multiplication. Fortunately, for the problem that we have chosen, parallelization is easy. Suppose that 4 programs or processors will work simultaneously to solve the addition problem. The simplest strategy, then, is to break the array items into 4 parts and have each program process one part. Thus the parallelization of the problem is as follows:

    1. Four programs say P0, P1, P2 and P3 will solve the problem.
    2. P0 will find the sum of array elements items[0] to items[24]. Similarly, P1 will find the sum of items[25] to items[49], P2 items[50] to items[74] and P3 items[75] to items[99].
    3. After these programs have executed, there must be some other program to find the sum of the 4 results obtained and give the final answer. Also, the elements of the array items are not known to the programs P0 to P3, so some program must tell them the values of the elements. Thus, apart from P0 to P3, we will require one more program that distributes the data, collects the results and coordinates execution. We call such a program the master, the programs P0 to P3 the slaves, and this organization the master-slave paradigm.

    With this organization in mind, let us write the algorithms for the master and the slave programs.


    /* Algorithm for the master program */
    initialize the array `items'.
    
    /* send data to the slaves */
    for i = 0 to 3
        Send items[25*i] to items[25*(i+1)-1] to slave Pi
    end for
    
    /* collect the results from the slaves */
    for i = 0 to 3
        Receive the result from slave Pi in result[i]
    end for
    
    /* calculate the final result */
    sum = 0
    for i = 0 to 3
        sum = sum + result[i]
    end for
    
    print sum
    

    The algorithm for the slave can be written as follows.
    /* Algorithm for the slave program */
    
    Receive 25 elements from the master in some array say `items'
    
    /* calculate intermediate result */
    sum = 0
    for i = 0 to 24
        sum = sum + items[i]
    end for
    
    send `sum' as the intermediate result to the master
    
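    As a sanity check on the algorithm, the two pseudocode fragments above can be collapsed into a single sequential C program, with the "send" and "receive" steps replaced by ordinary function calls. The names slave_sum and master_sum are my own; this is only a simulation of the decomposition, not a parallel program.

    ```c
    #include <stdio.h>

    #define SIZE       100
    #define NUM_SLAVES 4
    #define CHUNK      (SIZE / NUM_SLAVES)

    /* the "slave": sums one 25-element chunk */
    int slave_sum(const int *chunk)
    {
        int sum = 0;
        for (int i = 0; i < CHUNK; i++)
            sum += chunk[i];
        return sum;
    }

    /* the "master": distributes chunks, collects and combines results */
    int master_sum(const int *items)
    {
        int result[NUM_SLAVES];
        int sum = 0;

        for (int i = 0; i < NUM_SLAVES; i++)            /* "send"/"receive" */
            result[i] = slave_sum(items + i * CHUNK);

        for (int i = 0; i < NUM_SLAVES; i++)
            sum += result[i];
        return sum;
    }

    int main(void)
    {
        int items[SIZE];
        for (int i = 0; i < SIZE; i++)
            items[i] = i;                 /* 0 + 1 + ... + 99 */
        printf("sum = %d\n", master_sum(items));
        return 0;
    }
    ```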

    3. Implementing with PVM

    Now that the basic algorithm has been designed, let us consider how we can implement it. What hardware shall we run this program on? Clearly, very few of us have access to special machines designed to run parallel programs. However, no special hardware is required to implement this program. A single computer or a group of interconnected computers will do, thanks to PVM, a software system that enables us to use interconnected computers for parallel program execution. PVM stands for Parallel Virtual Machine. It enables you to create a number of programs or processes that run concurrently on the same or different machines, and provides functions with which you can pass messages among the processes for coordination. Even if you have only a single computer, PVM will work on it, although there will be no ``real'' parallel processing as such. For learning purposes, however, that is fine. Later on, I will describe how to do ``real'' parallel processing using PVM.

    In order to use the PVM system, you need to install the PVM software on your Linux system. If you are using Red Hat Linux, the RPM package for PVM is included on the CD, so you can install it as you would normally install other packages. Assuming that you have installed the PVM system on your machine, create the directory ~/pvm3/bin/LINUX/ in your home directory. Why? Because PVM requires that some of the executables you create be copied into this directory. Once you have done this, your setup is ready. Test it by giving the command pvm at the prompt. This starts the PVM Console, from which you can give commands to the PVM system and query status information. If everything is set up correctly, you will see the pvm> prompt. There, enter the command conf. The output should look something like this:

    pvm> conf
    conf
    1 host, 1 data format
                        HOST     DTID     ARCH   SPEED       DSIG
                   joshicomp    40000    LINUX    1000 0x00408841
    

    What does this mean? The PVM system allows a group of interconnected Linux systems to be viewed as a single ``virtual'' computer with much higher computing capacity than the individual machines. Thus, PVM will distribute the processes among a number of computers. By default, however, PVM assumes that only the host you are working on is to be included in the virtual machine, i.e. all processes you create will be scheduled to run on the same host. The conf command shows which hosts or nodes are in the virtual machine. Currently, there is only one. Later on, we will see how to add more hosts. For now, exit the PVM Console by giving the command halt.

    3.1 A Demonstration Program

    Now that you have made sure that the PVM system is properly installed, let us see how to write the programs. Programs for the PVM system can be written in both FORTRAN and C; we will be using the C language. To use the PVM system, you include some calls to the PVM functions in your C program along with the other statements, and link the PVM library with your programs. To get you started with PVM, let us write a simple program in which there will be one master and one slave. The master will send the slave a string, which the slave will convert to upper case and send back to the master. The master and slave programs are given below. To compile the programs, give the command make -f makefile.demo.

    [Click here for a tar file containing the program listings.]


          1 /* -------------------------------------------------------------------- *
          2  * master_pvm.c                                                         *
          3  *                                                                      *
          4  * This is the master program for the simple PVM demonstration.         *
          5  * -------------------------------------------------------------------- */
          6 #include <stdio.h>
          7 #include <stdlib.h>
          8 #include <pvm3.h>           /* declares PVM constants and functions */
          9 #include <string.h>
            
         10 int main()
         11 {
         12     int mytid;              /* our task ID          */
         13     int slave_tid;          /* task ID of the slave */
         14     int result;
         15     char message[] = "hello pvm";
         16     
         17     /* enroll ourselves into the PVM system and get our ID */
         18     mytid = pvm_mytid();
            
         19     /* spawn the slave */
         20     result = pvm_spawn("slave_pvm", (char**)0, PvmTaskDefault, 
         21                         "", 1, &slave_tid);
            
         22     /* check if the slave was spawned successfully          */
         23     if(result != 1)
         24     {
         25         fprintf(stderr, "Error: Cannot spawn slave.\n");
            
         26         /* clean up and exit from the PVM system            */
         27         pvm_exit();
         28         exit(EXIT_FAILURE);
         29     }
            
         30     /* initialize the data buffer to send data to slave     */
         31     pvm_initsend(PvmDataDefault);
            
         32     /* ``pack'' the string into the data buffer             */
         33     pvm_pkstr(message);
            
         34     /* send the string to the slave with a message tag of 0 */
         35     pvm_send(slave_tid, 0);
            
         36     /* wait and receive the result string from the slave    */
         37     pvm_recv(slave_tid, 0);
            
         38     
         39     /* ``unpack'' the result from the slave                 */
         40     pvm_upkstr(message);
            
         41     /* show the result from the slave                       */
         42     printf("Data from the slave : %s\n", message);
            
         43     /* clean up and exit from the PVM system                */
         44     pvm_exit();
         45     
         46     exit(EXIT_SUCCESS);
         47 } /* end main() */
            
         48 /* end master_pvm.c */
    

          1 /* -------------------------------------------------------------------- *
          2  * slave_pvm.c                                                          *
          3  *                                                                      *
          4  * This is the slave program for the simple PVM demonstration           *
          5  * -------------------------------------------------------------------- */
          6 #include <stdio.h>
          7 #include <ctype.h>
          8 #include <stdlib.h>
          9 #include <pvm3.h>
            
         10 #define MSG_LEN     20
         11 void convert_to_upper(char*);
            
         12 int main()
         13 {
         14     int mytid;
         15     int parent_tid;
         16     char message[MSG_LEN];
            
         17     /* enroll ourselves into the PVM system         */
         18     mytid = pvm_mytid();
            
         19     /* get the task ID of the master                */
         20     parent_tid = pvm_parent();
            
         21     /* receive the original string from master      */
         22     pvm_recv(parent_tid, 0);
         23     pvm_upkstr(message);
            
         24     /* convert the string to upper case             */
         25     convert_to_upper(message);
            
         26     /* send the converted string to the master      */
         27     pvm_initsend(PvmDataDefault);
            
         28     pvm_pkstr(message);
         29     pvm_send(parent_tid, 0);
            
         30     /* clean up and exit from the PVM system        */
         31     pvm_exit();
         32     
         33     exit(EXIT_SUCCESS);
         34 } /* end main() */
            
         35 /* function to convert the given string into upper case */
         36 void convert_to_upper(char* str)
         37 {
         38     while(*str != '\0')
         39     {
         40         *str = toupper(*str);
         41         str++;
         42     }
         43 } /* end convert_to_upper() */
            
         44 /* end slave_pvm.c */
    

          1 # Make file for the demo PVM program
            
          2 .SILENT :
          3 # paths for PVM include files and libraries
          4 INCDIR=-I/usr/share/pvm3/include
          5 LIBDIR=-L/usr/share/pvm3/lib/LINUX
            
          6 # link the PVM library
          7 LIBS=-lpvm3
          8 CFLAGS=-Wall
          9 CC=gcc
         10 TARGET=all
            
         11 # this is where the PVM executables go
         12 PVM_HOME=$(HOME)/pvm3/bin/LINUX
            
         13 all : $(PVM_HOME)/master_pvm $(PVM_HOME)/slave_pvm
            
         14 $(PVM_HOME)/master_pvm : master_pvm.c
         15     $(CC) -o $(PVM_HOME)/master_pvm master_pvm.c $(CFLAGS) $(LIBS) \
         16           $(INCDIR) $(LIBDIR)
            
         17 $(PVM_HOME)/slave_pvm : slave_pvm.c
         18     $(CC) -o $(PVM_HOME)/slave_pvm slave_pvm.c $(CFLAGS) $(LIBS) \
         19           $(INCDIR) $(LIBDIR)
    

    Once your programs have been compiled, you must copy them into the ~/pvm3/bin/LINUX directory. (The makefile does this by default.) Now, to run the programs, you must first start the PVM system. To do this, give the command pvm to start the PVM Console. Then, at the pvm> prompt, type quit. The output will be as follows:

    pvm> quit
    quit
    
    Console: exit handler called
    pvmd still running.
    
    Notice the last line, indicating that the PVM daemon (pvmd) is still running. To run PVM programs, the PVM daemon, which manages the exchange of messages, must be running; that is what we have accomplished here. Once the PVM daemon is running, you can run the programs with the following commands:
    [rahul@joshicomp rahul]$ cd ~/pvm3/bin/LINUX/
    [rahul@joshicomp LINUX]$ ./master_pvm
    Data from the slave : HELLO PVM
    [rahul@joshicomp LINUX]$
    

    Notice that the string is now in upper case as expected.

    3.2 Explanation of the program

    In this section, we will see exactly how this program works. First of all, to use the PVM functions, you need to include the header file pvm3.h in your programs. This is done on line 8 of master_pvm.c and line 9 of slave_pvm.c. Also, when compiling the programs, you need to link them with the PVM library. This is done by specifying the -lpvm3 option to the compiler, as on line 7 of makefile.demo. You also need to give the compiler the paths of the header and library files, as is done on lines 4 and 5 of the makefile.

    In the master program, we first get the task ID of the master by calling the PVM function pvm_mytid(). The PVM system assigns each process a unique 32-bit integer called its task ID, in the same way that Linux assigns each process a process ID. The task ID helps us identify the process with which we need to communicate. However, the master never actually uses its task ID (stored in mytid). Our intention here is just to call the function pvm_mytid(), which enrolls the process into the PVM system and generates a unique task ID for it. If we do not explicitly enroll the process, PVM automatically enrolls it on the first call to any PVM function. Next we use pvm_spawn() to create the slave process. The first parameter, "slave_pvm", is the name of the executable for the slave. The second parameter is the list of arguments that you wish to pass to the slave (similar to argv in normal C); since we do not want to send any arguments, we set this value to 0. The third parameter is a flag with which we can control how and where PVM starts the slave. Since we have only a single machine, we set this flag to PvmTaskDefault, telling PVM to use the default criteria while spawning the slave. The fourth parameter is the name of the host or architecture on which we wish to run the program; here it is kept empty. It is used to specify the host or architecture when the flag is other than PvmTaskDefault. The fifth parameter specifies the number of slaves to spawn, and the sixth parameter is a pointer to an array in which the IDs of the spawned slaves will be returned. This function returns the number of slaves actually spawned, which we check for correctness.

    A message in PVM consists of basically two parts: the data, and a tag that identifies the type of the message. The tag helps us distinguish between different messages. For example, in the addition example, which we are going to implement, suppose that you expect each slave to send the master an integer which is the sum of the elements it added. It is also quite possible that some slave may encounter an error and may want to send the master an integer indicating the error code. How does the master distinguish whether an integer it received from a slave is an intermediate result or an error code? This is where tags come into the picture. You may assign the message for the intermediate result a tag, say MSG_RESULT, which you #define in some header file, and a tag, say MSG_ERROR, for the message indicating an error. The master then looks at the message tag to decide whether the message contains an intermediate result or an error.
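    The tag-based dispatch described above can be sketched in plain C, independent of PVM. The struct, the handle_message function and the tag values 101/102 are all illustrative inventions; in a real PVM program the tag arrives with pvm_recv() and the payload is unpacked from the receive buffer.

    ```c
    #include <stdio.h>

    #define MSG_RESULT 101   /* hypothetical tag: payload is a partial sum */
    #define MSG_ERROR  102   /* hypothetical tag: payload is an error code */

    struct message { int tag; int value; };

    /* Inspect the tag to decide what the integer payload means:
       add it to the running total, or report it as an error code. */
    int handle_message(struct message m, int *total)
    {
        switch (m.tag) {
        case MSG_RESULT:
            *total += m.value;
            return 0;
        case MSG_ERROR:
            fprintf(stderr, "slave reported error code %d\n", m.value);
            return -1;
        default:
            return -1;                /* unknown tag */
        }
    }

    int main(void)
    {
        int total = 0;
        struct message ok  = { MSG_RESULT, 300 };
        struct message bad = { MSG_ERROR,  7 };

        handle_message(ok, &total);   /* accumulated into total */
        handle_message(bad, &total);  /* reported, total untouched */
        printf("total = %d\n", total);
        return 0;
    }
    ```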

    To send a message, you first need to ``initialize'' the send buffer. This is done by calling the pvm_initsend() function. When we want to exchange data between machines with different architectures (say, between a Pentium machine and a SPARC workstation), we need to encode the data at the sending end and decode it at the receiving end so that the data is properly delivered. The parameter to pvm_initsend() specifies the encoding scheme to be used. The value PvmDataDefault specifies an encoding scheme which enables data to be safely exchanged between heterogeneous architectures. Once the buffer has been initialized, we need to put the data into it and encode it. In our case, the data is a string, so we use the function pvm_pkstr() to ``pack'' (i.e. encode and put into the buffer) the data. If we had to send an integer, there is a different function, pvm_pkint(); similarly, there are functions for other data types. Once the data is packed, we call pvm_send() to send the message. The first argument is the ID of the process to which the message is to be sent, and the second argument is the message tag. Since there is only one type of message here, we set the tag to 0.

    Once the data is sent to the slave, the slave will process it and return it to the master, as we shall see. So we now call pvm_recv() to receive the data from the slave. Again, the parameters are the task ID from which the message is expected and the tag of the expected message. If the desired message has not yet been sent, this function waits and does not return; thus, in effect, the master is now waiting for the slave to process the data. Once the message arrives, the data is still in the receive buffer. It needs to be ``unpacked,'' i.e. decoded, to get the original message. This decoding is done by the pvm_upkstr() function. We then display the processed string.

    Before a PVM program exits, it must tell the PVM system that it is leaving, so that the resources occupied by the process can be released. This is done by calling the pvm_exit() function. After that, the master exits.

    The slave program is easy to understand. First it finds the task ID of the master (which is also its parent, since the master spawned the slave) by calling the function pvm_parent(). It then receives the message string from the master, converts it to upper case and sends the resulting string back to the master.

    3.3 The Addition Program

    Now that you know the basics of a PVM program, let us implement the addition algorithm we developed using PVM. There will be one master and 4 slaves. The master will spawn the 4 slaves and send each one its part of the data. The slaves will add their data and send the results to the master. Thus, two types of messages are exchanged: one when the master sends data to the slaves, for which we will use the tag MSG_DATA, and the other when the slaves send results to the master, for which we will use the tag MSG_RESULT. The rest is simple. The master and slave programs are given below.


          1 /* -------------------------------------------------------------------- *
          2  * common.h                                                             *
          3  *                                                                      *
          4  * This header file defines some common constants.                      *
          5  * -------------------------------------------------------------------- */
          6 #ifndef COMMON_H
          7 #define COMMON_H
        
          8 #define NUM_SLAVES      4                   /* number of slaves     */
          9 #define SIZE            100                 /* size of total data   */
         10 #define DATA_SIZE       (SIZE/NUM_SLAVES)   /* size for each slave  */
        
         11 #endif
         12 /* end common.h */
    

          1 /* -------------------------------------------------------------------- *
          2  * tags.h                                                               *
          3  *                                                                      *
          4  * This header file defines the tags that will be used for messages.    *
          5  * -------------------------------------------------------------------- */
          6 #ifndef TAGS_H
          7 #define TAGS_H
        
          8 #define MSG_DATA            101     /* data from master to slave    */
          9 #define MSG_RESULT          102     /* result from slave to master  */
        
         10 #endif
        
         11 /* end tags.h */
    

      1 /* -------------------------------------------------------------------- *
      2  * master_add.c                                                         *
      3  *                                                                      *
      4  * Master program for adding the elements of an array by using PVM      *
      5  * -------------------------------------------------------------------- */
      6 #include <stdio.h>
      7 #include <stdlib.h>
      8 #include <pvm3.h>           /* PVM constants and declarations   */
      9 #include "tags.h"           /* tags for messages                */
     10 #include "common.h"         /* common constants                 */
        
     11 int get_slave_no(int*, int);
        
     12 int main()
     13 {
     14     int mytid;
     15     int slaves[NUM_SLAVES]; /* array to store the task IDs of slaves    */
     16     int items[SIZE];        /* data to be processes                     */
     17     int result, i, sum;
     18     int results[NUM_SLAVES];    /* results from the slaves              */
        
     19     /* enroll into the PVM system   */
     20     mytid = pvm_mytid();
        
     21     /* initialize the array `items' */
     22     for(i = 0; i < SIZE; i++)
     23         items[i] = i;
        
     24     /* spawn the slaves             */
     25     result = pvm_spawn("slave_add", (char**)0, PvmTaskDefault,
     26                        "", NUM_SLAVES, slaves);
        
     27     /* check if proper number of slaves are spawned     */
     28     if(result != NUM_SLAVES)
     29     {
     30         fprintf(stderr, "Error: Cannot spawn slaves.\n");
     31         pvm_exit();
     32         exit(EXIT_FAILURE);
     33     }
        
     34     /* distribute the data among the slaves     */
     35     for(i = 0; i < NUM_SLAVES; i++)
     36     {
     37         pvm_initsend(PvmDataDefault);
     38         pvm_pkint(items + i*DATA_SIZE, DATA_SIZE, 1);
     39         pvm_send(slaves[i], MSG_DATA);
     40     }
        
     41     /* receive the results from the slaves      */
     42     for(i = 0; i < NUM_SLAVES; i++)
     43     {
     44         int bufid, bytes, type, source;
     45         int slave_no;
     46         
     47         /* receive message from any of the slaves       */
     48         bufid = pvm_recv(-1, MSG_RESULT);
        
     49         /* get information about the message            */
     50         pvm_bufinfo(bufid, &bytes, &type, &source);
     51         
     52         /* get the slave number that sent the message   */
     53         slave_no = get_slave_no(slaves, source);
        
     54         /* unpack the results at appropriate position   */
     55         pvm_upkint(results + slave_no, 1, 1);
     56     }
        
     57     /* find the final result            */
     58     sum = 0;
     59     for(i = 0; i < NUM_SLAVES; i++)
     60         sum += results[i];
        
     61     printf("The sum is %d\n", sum);
        
     62     /* clean up and exit from the PVM system    */
     63     pvm_exit();
        
     64     exit(EXIT_SUCCESS);
     65 } /* end main() */
     66         
     67 /* function to return the slave number of a slave given its task ID */
     68 int get_slave_no(int* slaves, int task_id)
     69 {
     70     int i;
        
     71     for(i = 0; i < NUM_SLAVES; i++)
     72         if(slaves[i] == task_id)
     73             return i;
        
     74     return -1;
     75 } /* end get_slave_no() */
        
     76 /* end master_add.c */
    
    

      1 /* -------------------------------------------------------------------- *
      2  * slave_add.c                                                          *
      3  *                                                                      *
      4  * Slave program for adding elements of an array using PVM              *
      5  * -------------------------------------------------------------------- */
      6 #include <stdlib.h>
      7 #include <pvm3.h>
      8 #include "tags.h"
      9 #include "common.h"
        
     10 int main()
     11 {
     12     int mytid, parent_tid;
     13     int items[DATA_SIZE];           /* data sent by the master  */
     14     int sum, i;
     15     
     16     /* enroll into the PVM system       */
     17     mytid = pvm_mytid();
        
     18     /* get the task ID of the master    */
     19     parent_tid = pvm_parent();
        
     20     /* receive the data from the master */
     21     pvm_recv(parent_tid, MSG_DATA);
     22     pvm_upkint(items, DATA_SIZE, 1);
        
     23     /* find the sum of the elements     */
     24     sum = 0;
     25     for(i = 0; i < DATA_SIZE; i++)
     26         sum = sum + items[i];
        
     27     /* send the result to the master    */
     28     pvm_initsend(PvmDataDefault);
     29     pvm_pkint(&sum, 1, 1);
     30     pvm_send(parent_tid, MSG_RESULT);
        
     31     /* clean up and exit from PVM       */
     32     pvm_exit();
     33     
     34     exit(EXIT_SUCCESS);
     35 } /* end main() */
    
    

      1 # Make file for the PVM program for addition - makefile.add
        
      2 .SILENT :
      3 # paths for PVM include files and libraries
      4 INCDIR=-I/usr/share/pvm3/include
      5 LIBDIR=-L/usr/share/pvm3/lib/LINUX
        
      6 # link the PVM library
      7 LIBS=-lpvm3
      8 CFLAGS=-Wall
      9 CC=gcc
     10 TARGET=all
        
     11 # this is where the PVM executables go
     12 PVM_HOME=$(HOME)/pvm3/bin/LINUX
        
     13 all : $(PVM_HOME)/master_add $(PVM_HOME)/slave_add
        
     14 $(PVM_HOME)/master_add : master_add.c common.h tags.h
     15     $(CC) -o $(PVM_HOME)/master_add master_add.c $(CFLAGS) $(LIBS) \
     16           $(INCDIR) $(LIBDIR)
     17   
     18 $(PVM_HOME)/slave_add : slave_add.c common.h tags.h
     19     $(CC) -o $(PVM_HOME)/slave_add slave_add.c $(CFLAGS) $(LIBS) \
     20          $(INCDIR) $(LIBDIR)
    

    Let us consider the slave program first, because it is simpler. The slave receives 25 array elements from the master in the array items, finds their sum and sends the result to the master with the message tag MSG_RESULT. Now consider the master. We define an array slaves of size NUM_SLAVES which will store the task IDs of the slaves spawned by the master. There is another array, results, in which the results from the slaves are stored. The master first initializes the array items and then spawns the slaves. After that, it distributes the data among the slaves. In the call to pvm_pkint() on line 38, the first parameter is a pointer to the array in which the integers are stored, the second is the number of integers to pack and the third is the ``stride.'' The stride is the spacing between the elements that are packed. When it is 1, consecutive elements are packed; when it is 2, PVM packs every second element, with the result that the even-numbered elements (0, 2, 4 ...) are packed. Here we keep its value as 1.
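    The effect of the stride can be shown with a small plain-C sketch. The pack_ints function below is my own stand-in for pvm_pkint(): it copies n integers into a buffer, taking every stride-th element from the source.

    ```c
    #include <stdio.h>

    /* Copy n integers from src into buf, taking every `stride`-th
       element - mimicking the third argument of pvm_pkint(). */
    void pack_ints(int *buf, const int *src, int n, int stride)
    {
        for (int i = 0; i < n; i++)
            buf[i] = src[i * stride];
    }

    int main(void)
    {
        int src[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        int buf[5];

        pack_ints(buf, src, 5, 2);      /* stride 2: elements 0, 2, 4, 6, 8 */
        for (int i = 0; i < 5; i++)
            printf("%d ", buf[i]);
        printf("\n");
        return 0;
    }
    ```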

    Once the data has been distributed among the slaves, the master has to wait until the slaves return the intermediate results. One possibility is for the master to first collect the result from slave 0 (i.e. the slave whose task ID is stored in slaves[0]), then from slave 1 and so on. However, this may not be efficient. For example, slave 0 may be running on a slower machine than slaves 1, 2 and 3. In that case, since the master is waiting for slave 0, the results from slaves 1, 2 and 3 are yet to be collected even though those calculations are complete. Here that may be acceptable, but consider a situation in which a slave, having finished one job, is given another. In that case, we would like to give a slave its next job immediately after it has completed its current one. Thus, the master must be in a position to respond to a message from any of the slaves. That is what is being done here.

    In the call to pvm_recv() on line 48, we know that the first parameter is the task ID of the message source. If this value is -1, it signifies a wildcard, i.e. a message from any process with the message tag MSG_RESULT will be received by the master. The received message, along with some control information, is stored in a buffer called the active receive buffer, and the call returns a unique ID for this buffer. Now, we want to know who the sender of the message is, so that we can assign the message data to the appropriate element of the array results. The function pvm_bufinfo() returns information about the message in the buffer, such as its tag, the number of bytes in it and the sender's task ID. Once we have the sender's task ID, we set the appropriate element of the results array to the integer sent by that slave. The rest of the program should be easy to understand.

    3.4 Working with PVM

    If you are interested, you can think of other problems for which you can write parallel programs. Often, due to bugs and the like, you may need to clean things up before restarting. The PVM Console provides the command halt, which kills the PVM daemon; all remaining PVM processes will then halt, or you can halt them with the Linux kill command. If you have a network of Linux machines interconnected by, say, a LAN, you can also do ``real'' parallel processing. First install PVM on all the hosts you wish to use, and then use the add command in the PVM Console to add hosts to the virtual machine. PVM will then schedule some of the processes to run on these hosts, so that real parallel processing is achieved.

    4. Implementing with MPI

    In the previous section we saw the implementation of the addition program using PVM. Now let us consider another approach that can be used in developing parallel programs: the MPI library. MPI stands for Message Passing Interface. It is a standard developed to enable us to write portable message-passing applications. It provides functions for exchanging messages and for many other activities as well. Note that unlike PVM, which is a software system, MPI is a standard, so many implementations of it exist. We will use an implementation of MPI called LAM, which stands for Local Area Multicomputer. It is also available on the Red Hat Linux CD as an RPM package, so installation should not be a problem.

    After you have installed the RPM package, go to the /usr/boot directory and create a file named conf.lam containing the single line: lamd $inet_topo. The same directory should also have a file named bhost.def; if it does not, create it, containing the single line: localhost. Now, to test whether everything is working correctly, type lamboot at the prompt. You should get the following response:

    [rahul@joshicomp boot]$ lamboot
    
    LAM 6.3.1/MPI 2 C++/ROMIO - University of Notre Dame
    
    [rahul@joshicomp boot]$
    

    If the output indicates an error, there is some problem with the installation; recheck the above steps or see the lamboot(1) manual page for troubleshooting.

    Assuming that LAM/MPI is properly installed on your system, let us again write a small demonstration program for MPI.

    4.1 A Demonstration MPI Program

    We will again write a simple master-slave program, this time to evaluate the expression (a + b) * (c - d). The master will read the values of a, b, c, and d from the user; one slave will calculate (a + b) and the other will calculate (c - d). The program is as follows.


      1 /* -------------------------------------------------------------------- *
      2  * mpi_demo.c                                                           *
      3  *                                                                      *
      4  * A simple MPI demonstration program to evaluate an expression.        *
      5  * -------------------------------------------------------------------- */
      6 #include <stdio.h>
      7 #include <stdlib.h>
      8 #include <lam/mpi.h>            /* for MPI constants and functions      */
        
      9 #define MSG_DATA        100     /* message from master to slaves        */
     10 #define MSG_RESULT      101     /* message from slave to master         */
        
     11 #define MASTER          0       /* rank of master                       */
     12 #define SLAVE_1         1       /* rank of first slave                  */
     13 #define SLAVE_2         2       /* rank of second slave                 */
        
     14 /* functions to handle the tasks of master, and the two slaves          */
     15 void master(void);
     16 void slave_1(void);
     17 void slave_2(void);
        
     18 int main(int argc, char** argv)
     19 {
     20     int myrank, size;
     21     
     22     /* initialize the MPI system                                        */
     23     MPI_Init(&argc, &argv);
        
     24     /* get the size of the communicator i.e. number of processes        */
     25     MPI_Comm_size(MPI_COMM_WORLD, &size);
        
     26     /* check for proper number of processes                             */
     27     if(size != 3)
     28     {
     29         fprintf(stderr, "Error: Three copies of the program should be run.\n");
     30         MPI_Finalize();
     31         exit(EXIT_FAILURE);
     32     }
     33     
     34     /* get the rank of the process                                      */
     35     MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        
     36     /* perform the tasks according to the rank                          */
     37     if(myrank == MASTER)
     38         master();
     39     else if(myrank == SLAVE_1)
     40         slave_1();
     41     else
     42         slave_2();
        
     43     /* clean up and exit from the MPI system                            */
     44     MPI_Finalize();
        
     45     exit(EXIT_SUCCESS);
     46 } /* end main() */
        
     47 /* function to carry out the masters tasks          */
     48 void master(void)
     49 {
     50     int a, b, c, d;
     51     int buf[2];
     52     int result1, result2;
     53     MPI_Status status;
        
     54     printf("Enter the values of a, b, c, and d: ");
     55     scanf("%d %d %d %d", &a, &b, &c, &d);
        
     56     /* send a and b to the first slave              */
     57     buf[0] = a;
     58     buf[1] = b;
     59     MPI_Send(buf, 2, MPI_INT, SLAVE_1, MSG_DATA, MPI_COMM_WORLD);
        
     60     /* send c and d to the second slave            */
     61     buf[0] = c;
     62     buf[1] = d;
     63     MPI_Send(buf, 2, MPI_INT, SLAVE_2, MSG_DATA, MPI_COMM_WORLD);
        
     64     /* receive results from the slaves              */
     65     MPI_Recv(&result1, 1, MPI_INT, SLAVE_1, MSG_RESULT, 
     66              MPI_COMM_WORLD, &status);
     67     MPI_Recv(&result2, 1, MPI_INT, SLAVE_2, MSG_RESULT, 
     68              MPI_COMM_WORLD, &status);
        
     69     /* final result                                 */
     70     printf("Value of (a + b) * (c - d) is %d\n", result1 * result2);
     71 } /* end master() */
        
     72 /* function to carry out the tasks of the first slave       */
     73 void slave_1(void)
     74 {
     75     int buf[2];
     76     int result;
     77     MPI_Status status;
     78     
     79     /* receive the two values from the master       */ 
     80     MPI_Recv(buf, 2, MPI_INT, MASTER, MSG_DATA, MPI_COMM_WORLD, &status);
     81     
     82     /* find a + b                                   */
     83     result = buf[0] + buf[1];
        
     84     /* send result to the master                    */
     85     MPI_Send(&result, 1, MPI_INT, MASTER, MSG_RESULT, MPI_COMM_WORLD);
     86 } /* end slave_1() */
        
     87 /* function to carry out the tasks of the second slave      */
     88 void slave_2(void)
     89 {
     90     int buf[2];
     91     int result;
     92     MPI_Status status;
     93     
     94     /* receive the two values from the master       */
     95     MPI_Recv(buf, 2, MPI_INT, MASTER, MSG_DATA, MPI_COMM_WORLD, &status);
     96     
     97     /* find c - d                                   */
     98     result = buf[0] - buf[1];
        
     99     /* send result to master                        */
    100     MPI_Send(&result, 1, MPI_INT, MASTER, MSG_RESULT, MPI_COMM_WORLD);
    101 } /* end slave_2() */
        
    102 /* end mpi_demo.c */
    

      1 # Makefile for MPI demo program - makefile.mpidemo
      2 .SILENT:
      3 CFLAGS=-I/usr/include/lam -L/usr/lib/lam
      4 CC=mpicc
        
      5 mpi_demo : mpi_demo.c
      6     $(CC) $(CFLAGS) mpi_demo.c -o mpi_demo
    

    To compile this program, give the command make -f makefile.mpidemo. Once the program is compiled, you first need to ``start'' or ``boot'' the Local Area Multicomputer system; this is done with the lamboot command. After that, run the program by giving the following command: mpirun -np 3 mpi_demo.

    [rahul@joshicomp parallel]$ lamboot
    
    LAM 6.3.1/MPI 2 C++/ROMIO - University of Notre Dame
    
    [rahul@joshicomp parallel]$ mpirun -np 3 mpi_demo
    Enter the values of a, b, c, and d: 1 2 3 4
    Value of (a + b) * (c - d) is -3
    [rahul@joshicomp parallel]$
    

    4.2 Explanation of the Program

    To use the MPI system and functions, you first need to include the header file mpi.h, as is done on line 8. In PVM, different processes are identified by their task IDs. In MPI, the system assigns each process a unique integer called its rank, beginning with 0. The rank is used to identify a process and communicate with it. Secondly, each process is a member of some communicator. A communicator can be thought of as a group of processes that may exchange messages with each other. By default, every process is a member of the communicator called MPI_COMM_WORLD. Although we can create new communicators, this would lead to an unnecessary increase in complexity, so we will content ourselves with the MPI_COMM_WORLD communicator.

    Any MPI program must first call the MPI_Init() function, which the process uses to enter the MPI system and perform any implementation-specific initialization. Next, we get the size of the MPI_COMM_WORLD communicator, i.e. the number of processes in it, using the MPI_Comm_size() function. The first parameter is the communicator and the second is a pointer to an integer in which the size will be returned. Here, we need exactly 3 processes: one master and two slaves. After that, we get the rank by calling MPI_Comm_rank(). The three processes will have ranks 0, 1 and 2. All these processes are essentially identical, i.e. there is no inherent master-slave relationship between them, so it is up to us to decide who will be the master and who will be the slaves. We choose rank 0 as the master and ranks 1 and 2 as the slaves. Note also that we have included the code for the master and both slaves in the same program; depending upon the rank, we execute the appropriate function. There is no spawning of processes as in PVM: as we shall see, the number of processes is specified as a command-line argument rather than the program spawning slaves itself. Once the execution is finished, we must call the MPI_Finalize() function to perform final cleanup.

    Let us now consider the master function. After reading the values of a, b, c, and d from the user, the master must send a and b to slave 1 and c and d to slave 2. Instead of sending the variables individually, we pack them into an array and send the array of 2 integers instead. It is always better to pack the data you want to send into a single message rather than send a number of messages for individual data items; this saves the communication overhead involved in passing the messages. Once the buffer is ready, unlike in PVM, we do not need to pack or encode the data; MPI manages these details internally. So we can directly call the MPI_Send() function to send the data. The first parameter (line 59) is the address of the buffer, the second the number of elements in the message, and the third a specification of the data type of the buffer, here MPI_INT, indicating that the buffer is an array of integers. Next comes the rank of the process to which we want to send the message, here SLAVE_1 (#defined as 1). Next is the message tag, similar to that in PVM. The final parameter is the communicator of which the receiver is a member, in this case MPI_COMM_WORLD.

    Once the data is distributed among the slaves, the master must wait for them to send back the results. For simplicity, we first collect the message from slave 1 and then from slave 2. To receive a message, we use the MPI_Recv() function. Again, packing and decoding are handled by MPI internally. The first argument (line 65) is the address of the buffer in which to receive the data. The second is the size of the buffer in terms of the number of elements, in this case 1. Next is the data type, here MPI_INT. The next three parameters specify the rank of the source of the message, the tag of the expected message and the communicator of which the source is a member. The final argument is a pointer to a structure of type MPI_Status in which some status information will be returned (here we ignore this information). Now that you know the basic MPI terms, the slave_1() and slave_2() functions should be clear.

    In this program, the code for the master as well as the slaves was in the same executable file. Later on we will see how to execute multiple executables. From the makefile, we see that to compile an MPI program, a wrapper program, mpicc, is provided, which links the required libraries automatically. To run the program, use the command mpirun -np 3 mpi_demo after booting LAM. Here we tell LAM to create 3 processes: one master and two slaves.

    4.3 The Addition Program Again

    Let us now re-implement the addition program that we designed earlier, this time using MPI. Here we will also see how to execute separate programs in MPI. When an MPI program uses a single executable, we call it a Single Program Multiple Data (SPMD) application. When two or more executables are involved, we call it a Multiple Program Multiple Data (MPMD) application. With LAM, MPMD programs are executed with the help of an application schema. But first, let us see the source of the master and slave programs.


      1 /* -------------------------------------------------------------------- *
      2  * master_mpi.c                                                         *
      3  *                                                                      *
      4  * Master program for adding the elements of an array using MPI         *
      5  * -------------------------------------------------------------------- */
      6 #include <stdio.h>
      7 #include <stdlib.h>
      8 #include <lam/mpi.h>        /* MPI constants and functions              */
      9 #include "tags.h"           /* tags for different messages              */
     10 #include "common.h"         /* common constants                         */
        
     11 int main(int argc, char** argv)
     12 {
     13     int size, i, sum;
     14     int items[SIZE];
     15     int results[NUM_SLAVES];
     16     MPI_Status status;
        
     17     /* initialize the MPI system               */
     18     MPI_Init(&argc, &argv);
        
     19     /* check for proper number of processes     */
     20     MPI_Comm_size(MPI_COMM_WORLD, &size);
        
     21     if(size != 5)
     22     {
     23         fprintf(stderr, "Error: Need exactly five processes.\n");
     24         MPI_Finalize();
     25         exit(EXIT_FAILURE);
     26     }
        
     27     /* initialize the `items' array             */
     28     for(i = 0; i < SIZE; i++)
     29         items[i] = i;
        
     30     /* distribute the data among the slaves     */
     31     for(i = 0; i < NUM_SLAVES; i++)
     32         MPI_Send(items + i*DATA_SIZE, DATA_SIZE, MPI_INT, i + 1,
     33                  MSG_DATA, MPI_COMM_WORLD);
        
     34     /* collect the results from the slaves      */
     35     for(i = 0; i < NUM_SLAVES; i++)
     36     {
     37         int result;
     38         
     39         MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MSG_RESULT,
     40                  MPI_COMM_WORLD, &status);
     41         results[status.MPI_SOURCE - 1] = result;
     42     }
        
     43     /* find the final answer                    */
     44     sum = 0;
     45     for(i = 0; i < NUM_SLAVES; i++)
     46         sum = sum + results[i];
        
     47     printf("The sum is %d\n", sum);
        
     48     /* clean up and exit the MPI system         */
     49     MPI_Finalize();
        
     50     exit(EXIT_SUCCESS);
     51 } /* end main() */
        
     52 /* end master_mpi.c */
    

      1 /* -------------------------------------------------------------------- *
      2  * slave_mpi.c                                                          *
      3  *                                                                      *
      4  * Slave program for adding array elements using MPI.                   *
      5  * -------------------------------------------------------------------- */
      6 #include <stdio.h>
      7 #include <stdlib.h>
      8 #include <lam/mpi.h>        /* MPI functions and constants  */
      9 #include "tags.h"           /* message tags                 */
     10 #include "common.h"         /* common constants             */
        
     11 #define MASTER  0           /* rank of the master           */
        
     12 int main(int argc, char** argv)
     13 {
     14     int items[DATA_SIZE];
     15     int size, sum, i;
     16     MPI_Status status;
        
     17     /* initialize the MPI system            */
     18     MPI_Init(&argc, &argv);
        
     19     /* check for proper number of processes */
     20     MPI_Comm_size(MPI_COMM_WORLD, &size);
        
     21     if(size != 5)
     22     {
     23         fprintf(stderr, "Error: Need exactly five processes.\n");
     24         MPI_Finalize();
     25         exit(EXIT_FAILURE);
     26     }
        
     27     /* receive data from the master         */
     28     MPI_Recv(items, DATA_SIZE, MPI_INT, MASTER, MSG_DATA,
     29              MPI_COMM_WORLD, &status);
        
     30     /* find the sum                         */
     31     sum = 0;
     32     for(i = 0; i < DATA_SIZE; i++)
     33         sum = sum + items[i];
        
     34     /* send the result to the master        */
     35     MPI_Send(&sum, 1, MPI_INT, MASTER, MSG_RESULT, MPI_COMM_WORLD);
        
     36     /* clean up and exit MPI system         */
     37     MPI_Finalize();
        
     38     exit(EXIT_SUCCESS);
     39 } /* end main() */
        
     40 /* end slave_mpi.c */
    

      1 # Makefile for MPI addition program - makefile.mpiadd
      2 .SILENT:
      3 CFLAGS=-I/usr/include/lam  -L/usr/lib/lam
      4 CC=mpicc
        
      5 all : master_mpi slave_mpi
        
      6 master_mpi : master_mpi.c common.h tags.h
      7     $(CC) $(CFLAGS) master_mpi.c -o master_mpi
        
      8 slave_mpi : slave_mpi.c common.h tags.h
      9     $(CC) $(CFLAGS) slave_mpi.c -o slave_mpi
    

    To compile the programs, type make -f makefile.mpiadd. (The files common.h and tags.h are the same as those used for the PVM program.) This will create the master_mpi and slave_mpi executables. Now, how do we tell MPI to run both these executables? This is where the application schema file comes in. It specifies the executables to be run, the nodes on which to run them and the number of copies of each executable to run. Create a new file add.schema containing the following lines:

    # Application schema for the addition program using MPI
    n0 master_mpi
    n0 -np 4 slave_mpi
    

    This file specifies that MPI should start 1 copy of the master (which will have rank 0) and 4 copies of the slave on node n0, i.e. the local node. You can specify many more parameters in this schema file, such as command-line arguments; see the appschema(1) manual page. Once the schema file is ready, you can run the programs as follows:

    [rahul@joshicomp parallel]$ lamboot
    
    LAM 6.3.1/MPI 2 C++/ROMIO - University of Notre Dame
    
    [rahul@joshicomp parallel]$ mpirun add.schema
    The sum is 4950
    [rahul@joshicomp parallel]$
    

    Much of the program should be easy to understand. On line 39, when receiving intermediate results from the slaves, we specify the source as MPI_ANY_SOURCE, since we want to respond to the slaves in whatever order they complete their calculations, as discussed earlier. In this case, the status structure contains the rank of the actual source in the field MPI_SOURCE, and we use this to store the intermediate result in the appropriate element of the results array.

    If you have a network of interconnected computers, you can make the programs run on many of them by suitably modifying the application schema file: instead of specifying n0 as the host, specify the name of each host and the number of processes you wish to schedule on it. For more information, see the manual pages and the references.

    5. Conclusion

    We have seen how to write parallel programs using the PVM and MPI libraries. Since these libraries are available on many platforms and are the de facto standards for implementing parallel programs, programs written with PVM or MPI will run with little or no modification on large-scale machines, should the need arise. In this article we have concentrated mainly on the point-to-point communication functions provided by these libraries and their use in message passing. Beyond these facilities, both PVM and MPI provide a number of advanced features, such as collective communication (broadcasting or multicasting), process groups and group management, reduction functions and so on. You are welcome to explore these advanced features. These freely available packages let you use a network of computers as a single large computer, so if you have a suitably large problem to solve, you might consider using a network at your college or office. You will have to refer to the books listed below for the details of how such a setup is established. Many tutorials as well as books are available to help you; below is a list of the material I referred to.

    1. PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Networked Parallel Computing, Al Geist, Adam Beguelin, Jack Dongarra, Robert Manchek, Weicheng Jiang and Vaidy Sunderam, MIT Press. Available at www.netlib.org
    2. MPI: The Complete Reference, Marc Snir, Steve Otto, Steven Huss-Lederman, David Waker and Jack Dongarra, MIT Press. Available at www.netlib.org.
    3. RS/6000 SP: Practical MPI Programming, Yukiya Aoyama and Jan Nakano, International Technical Support Organization, IBM Corporation, www.redbooks.ibm.com.
    4. A Beginner's Guide to PVM Parallel Virtual Machine, Clay Breshears and Asim YarKhan, Joint Institute of Computational Science, University of Tennessee, USA. www-jics.cs.utk.edu/PVM/pvm/_guide.html.
    5. PVM: An Introduction to Parallel Virtual Machine, Emily Angerer Crawford, Office of Information Technology, High Performance Computing, www.hpc.gatech.edu/seminar/pvm.html.

    6. Acknowledgements

    I would like to thank my project guide, Dr. Uday Khedker, for his encouragement and help. I would also like to thank the Center for Development of Advanced Computing for allowing me to run the MPI and PVM programs on the PARAM supercomputer, and Dr. Anabarsu for guiding me during the implementation.


    Copyright © 2001, Rahul U. Joshi.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Web Portals Bank on Open-Source Infrastructure

    By Ned Lilly


    Wireless Developer Network (WDN) and GeoCommunity say sayonara to the database big boys.

    When a single company formed to operate two web portals for online communities, it turned to the biggest names in the business to build its technology infrastructure. Wireless Developer Network (WDN), for wireless communications professionals, and its sister site GeoCommunity, for geographic information specialists, licensed Microsoft and Oracle products, respectively, through hosting services. The two sites were up and running in no time, offering virtual homes to thousands of professionals who needed industry news, software downloads, product reviews and live chats. But while the portals seemed a success on the surface, disaster lurked not far below.

    Within months, WDN ran into performance problems with SQL Server and security issues with MS IIS. Oracle 8i worked like a dream for the GeoCommunity, but the licensing fees threatened to crush the small company. As the portal expanded capacity and users, it knew Oracle's aggressive pricing structures would cut even deeper into its slim profit margin. That's when the company's technical staff began to push a radical concept: ditch the big boys in favor of a single open-source technology platform for both web portals.

    The web portals had to be able to serve web pages up 24/7 without any crashes or service interruptions. The developers wanted to go with Linux because their experiences told them it was a stable and reliable platform for the web. They also knew firsthand that Apache web servers were a superior product, faster, more scalable and easier to configure, and optimized for the Linux platform.

    The technical staff had worked in both open-source and proprietary environments, and had come to believe open source was the more secure choice for web-based applications. In their view, open-source technologies grew up on the web, while most proprietary applications were later adapted to it. Throughout the open-source development process, developers drill down on security and performance issues for web applications. With hundreds of developers and users testing and tweaking the programs, security holes are often caught and corrected at an early stage of the development process.

    WDN's leadership quickly bought into the idea. Many managers believe that "nothing good can be free" and subscribe to the common myth that open-source products lack professional technical support, but this company's senior management team was sold on its developers' enthusiasm and positive experiences with open source.

    In fact, managers skittish about defying convention need only look around for evidence of the proliferation of open-source applications and tools in business. Linux powers an estimated 36 percent of Internet-connected servers today, while Apache web servers are on about 61 percent of public web sites, according to the Internet research firm, Netcraft. Industry analysts at Forrester Research recently identified open source as a powerful growing trend in business with potential to radically reshape the software industry by 2004.

    Yet the database market, the core of web-based businesses today, still remains firmly in the grip of proprietary vendors such as Oracle, Microsoft and IBM. But in recent years, open-source databases such as PostgreSQL and MySQL have evolved to the point where they're beginning to compete with the proprietary giants in performance and functionality. They're attracting skilled development and user communities, as well as enterprise business users across a wide range of industries, and are rising up to challenge the proprietary status quo in the competitive database market.

    The heart of the portals

    For WDN and GeoCommunity, the most difficult infrastructure decision was the choice of the right database management system. The portals needed a system scalable and functional enough to handle thousands of visitors each month and power scores of dynamic applications, including e-commerce.

    The portals tested two of the most widely used open source databases, MySQL and PostgreSQL. While MySQL was simple to configure and use, it lacked the transaction support and scalability that the company needed to run their highly interactive sites. MySQL has attracted a large user base, but the staff thought it seemed more suitable for lower-traffic web sites. The staff also ran rigorous tests on PostgreSQL, a heavy-duty object-relational database. It withstood the barrage of tests without flinching, supporting advanced features during heavy simulated transactions very well.

    After downloading their selection of open-source applications, including the Red Hat Linux operating system, Apache web server and PostgreSQL database, the technical staff configured the system in less than a half hour. The portals run 12 servers, with PostgreSQL powering dynamic applications such as book sales, message boards and mailing lists. The new system was up and running in minutes, with no interruptions, and neither web portal has since crashed or lost data.

    Confronting the FUD (fear, uncertainty and doubt) factor

    Yet there are good reasons why open-source technologies were once the exclusive domain of skilled hackers and expert users. In the past, these applications purposefully lacked the bells and whistles of their proprietary competition and were difficult for the less advanced user even to install. Open-source programs have become much more user-friendly over time because independent developers have begun to pay greater attention to improving tools, additional features and perhaps most importantly, documentation.

    For WDN and GeoCommunity, the decision to migrate from a proprietary to an open-source system was less difficult than for most traditional businesses. The web portals employ technical staff with experience in both environments. At every level, the company embraced the idea of adopting a more flexible, and less financially draining, open-source alternative. They understood the open-source development model and bought into its underlying philosophy. Just as importantly, they possessed the technical skills to confront many of the issues that could arise in an open-source platform. In fact, with access to their new system's code, they could now even modify their software's features to better fit the company's needs.

    Many e-businesses like these web portals, along with brick and mortar retailers that are moving into e-commerce, have similar needs, but lack the background and technical expertise to easily integrate open-source technologies or migrate to a fully open-source platform. These businesses simply want web sites that their customers and vendors can use without difficulty. They need database-enabled web applications with 24/7 availability that won't crash or lose data--even with thousands of daily transactions. They want a site that always works, convenient ways for customers to buy their products, and secure methods through which to bring in their money. Because database applications are so critical to their mission, many businesses adopt well-known proprietary systems, feeling confident these companies will deliver quality and reliability. Yet the rising costs, the uncertain economy, and in some cases, the surprisingly unpredictable performance of commercial applications, all have begun to spark greater interest in open-source technologies today.

    Still, these businesses are understandably skeptical about open source. They're used to the proprietary business model, and can't quite fathom why good software applications would be available to download for free from the Internet. The fact that these applications are not owned by a corporation causes suspicion and concern; if no single vendor owns it, people assume the software is not secure, powerful or reliable, and that it lacks accessible support and services. And the idea of thousands of independent developers around the world collaborating to create free software strikes many business managers as chaotic, which makes them even more reluctant to trust the results.

    Slowly, these businesses are becoming educated about the open-source development model, which evolved not to make money, but to produce functional software efficiently. They're finding that while the development process varies for each open-source application today, the best projects most often attract a global community of highly skilled developers. And it's becoming clearer that these systematic meritocracies encourage rigorous testing and rapid development rates, and result in fewer bugs and security holes and more frequent releases of new and improved features.

    A growing number of e-businesses such as WDN and the GeoCommunity are building their businesses on open-source platforms. These web portals have found that the software's fast-evolving development cycles, its lower costs, and its customizability make it ideal in their high-growth, quick-changing industries. The lower overall cost of open-source software is attractive to small and mid-sized businesses like these, who often have to spend thousands--even hundreds of thousands--of dollars on purchasing or licensing proprietary applications alone.

    Another important advantage is that its code is open and modifiable. The open-source model rests on the belief that software develops faster and better when its source code is accessible to all skilled developers. Mature open-source technologies such as Apache, Linux, and PostgreSQL have thrived under the principles of open collaboration. Similarly, businesses that employ open source technologies can benefit both from open source's accelerated development model and free access to its internal code, which enables them to modify the code as needed. Open-source technology is highly conducive to innovation, and ensures that most of the applications it produces improve continuously and quickly. Companies that use it usually find that their software programs evolve as quickly as their businesses do.

    Why isn't everyone using open source?

    The perceived lack of professional support services for open-source software remains the stumbling block to its widespread use in business and industry. Business managers want to be able to call a service center when problems arise. In the past, those experiencing problems with open-source applications could send out an e-mail and usually within hours receive the right solution from the developers themselves. These informal networks of technical support can provide the highest possible levels of support, but they are not always immediately available, nor can they scale to meet the growing demands.

    The issue of technical support was important to WDN and GeoCommunity because they knew they would occasionally need a technical safety net. They purposely chose applications with strong development communities in order to get the help they need directly from their web sites. Their technical staff concedes that companies without their own in-house technical staff need more comprehensive support. It's the one issue that continues to scare managers away from open source.

    Fortunately, entrepreneurs always rush to fill a vacuum. Red Hat was one of the first companies to provide support for the Linux operating system, and now a slew of other companies are springing up to provide support, training and consulting services for some of the best open source database applications. The open-source support gap is shrinking fast, which is good news for emerging companies in need of an affordable platform for their growing business.

    In the meantime, WDN and the GeoCommunity remain satisfied with their open-source decision, and with the reliability, error reporting, community support, clean designs and strict adherence to industry standards that came with it. It's also refreshing that with open source, there can be no effort to force "lock-in" or add proprietary hooks that would prevent a transition to other products in the future. As one senior staff member said, "It's truly been a liberating experience to use good products that were designed simply to meet a need--not to further a corporate agenda."


    Copyright © 2001, Ned Lilly.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Finding my computer at home from the outside

    By Mark Nielsen


    1. Introduction
    2. Perl script uploading ip address.
    3. Webpage and perl script on remote computer.
    4. Cron job I run in the background.
    5. Conclusion
    6. References

    Introduction

    The purpose of this article is to make it so I can find my computer at home when I am traveling around the Bay Area doing computer work, recruiting, and volunteer work. Most of the time, I am busy traveling around, although I am able to work from home half the time now. My computer at home uses a Ricochet modem. The dumb people who promised me a good DSL connection and a satellite connection where I live were a bunch of morons. The max DSL I could get would be 144k (which I found out AFTER I moved in), which is pointless when I already have a Ricochet modem at 128k. Plus, I am facing the wrong way for a satellite connection. Whatever you do, make sure the people who sell you an apartment put it in the contract that you are promised a certain speed of connection to the internet, or you can break the contract with no penalty. As soon as it is worth it, I am moving. For now, I am stuck with a dial-up connection, which isn't bad most of the time.

    Some people have static DSL connections, which would take care of the problem I have: my ip address on the internet changes each time I dial up. I used to email myself the ip address, parse out the data, and put it on a webpage. I have a better solution now: I use ssh to transfer a file to my remote web server once an hour.
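    The core of the approach is just pulling the current ip address out of the ifconfig output. That step can be sketched in shell; the text below is an inlined sample standing in for real /sbin/ifconfig output (2.2/2.4-era net-tools format), and the address in it is made up.

```shell
# Sample ifconfig-style output; on a live system you would use
# "$(/sbin/ifconfig)" instead. The address here is made up.
ifconfig_output='ppp0      Link encap:Point-to-Point Protocol
          inet addr:166.93.8.200  P-t-P:166.93.8.1  Mask:255.255.255.255'

# Print the "inet addr:" field of the first ppp interface.
ip=$(printf '%s\n' "$ifconfig_output" |
     sed -n '/^ppp/,/^$/s/.*inet addr:\([0-9.]*\).*/\1/p')
echo "$ip"
```

    The Perl script later in this article does the same extraction with more error handling, then rsyncs the result to the remote machine.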

    Setting up ssh.

    The version of ssh I am using is 1.2.27. I should be using OpenSSH, but for now, I am using commercial ssh.

    We need to make it so we can transfer files securely from my computer at home to the remote computer. We use the ssh-keygen program (which comes with ssh). Here is a paragraph from the manpage for ssh.

    Ssh implements the RSA authentication protocol automatically. The user creates his/her RSA key pair by running ssh-keygen(1). This stores the private key in .ssh/identity and the public key in .ssh/identity.pub in the user's home directory. The user should then copy the identity.pub to .ssh/authorized_keys in his/her home directory on the remote machine (the authorized_keys file corresponds to the conventional .rhosts file, and has one key per line, though the lines can be very long). After this, the user can log in without giving the password. RSA authentication is much more secure than rhosts authentication.
    So I ran "ssh-keygen" as a user on my computer at home. Then I transferred the ".ssh/identity.pub" file from my computer at home to the remote computer as ".ssh/authorized_keys" for the user "web1". This makes it so I can log in from home to my remote computer without having to use a password. The same key also lets me transfer files without a password:
    rsync -e ssh -av /home/test1/IP.txt web1@somecomputer.com:public_html/IP.txt
    

    Perl script uploading ip address.

    Here is the perl script I use to upload the ip address. You should change the values of the usernames and the remote computer address.
    #!/usr/bin/perl
    
    use strict;
    
      ### Run ifconfig and store the data in the @Temp list. 
    my @Temp = `/sbin/ifconfig`;
    
      #### Search for ppp
    my $Search = "ppp";
      ### If you are looking for the ip address of your ethernet card, 
      ### uncomment the next line;
    # $Search = "eth0";
    
      ### Make the line we find the ip address blank initially.
    my $Match_Line = "";
    my $Match_Device = "no";
    
      ## Search through the lines, if we find a match, save the lines until
      ## we find a blank line. 
    
    foreach my $Line (@Temp)
      {
        ### If we have a match, abort. 
      if ($Match_Line ne "")   {@Temp = ();}
        ### else, see if we can find a match at the beginning of line;
      elsif ($Line =~ /^$Search/) {$Match_Device = "yes";}
        ### else, if we found the device, and we find the line we are looking for
      elsif (($Match_Device eq "yes") && ($Line =~ /^ +inet/)) 
        {$Match_Line = $Line;}  
      }
    
      ## If our $Match_Line is not blank, split it and get the ip address.
    my $IP = "";
    if ($Match_Line ne "") 
       {
        ### Get rid of stuff before addr:
       my ($Junk,$Good) = split(/addr\:/, $Match_Line,2);
        ### Get rid of stuff after the first space
       ($Good,$Junk) = split(/ /, $Good,2);
       $IP = $Good;
       }
    
      ## If $IP is not blank, we have something. Save to file and transfer file
      ## to remote site. 
      ### Please don't use the /tmp to store this file, but some other location.
    if ($IP ne "")
      {  
      open(FILE,">/tmp/IP.txt") or die "Can't write /tmp/IP.txt: $!\n";
      print FILE "$IP\n";
      close FILE;
      system ('rsync -av -e ssh /tmp/IP.txt web1@somecomputer.com:public_html/IP.txt');
      }
       ### Else, we should send ourselves an email, or do something
       ### to let us know it didn't work. This is left as an exercise.
    else {}
    

    Webpage and perl script on remote computer.

    On the remote computer storing the ip address, we need to detect whether the address is stale. If it is more than an hour old, we should print out an error message. So I use this perl script, which I name "/home/web1/public_html/IP.pl".
    #!/usr/bin/perl
    
    use strict;
    
    print "Content-type: text/html\n\n\n\n";
    
    my $File = "/home/web1/public_html/IP.txt";
    open(FILE, $File);
    my $Line = <FILE>;
    chomp $Line;
    close FILE;
    
    my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
       $atime,$mtime,$ctime,$blksize,$blocks)
         = stat($File);
    my $time = time();
    
    print "<br> Last known ip address was $Line\n";
    print qq(<br> To transfer to the website, 
         <a href="http://$Line">click here</a>\n);
    
    my $Diff = $time - $mtime;
    if ($Diff > 4000) 
      {
      print "<p>ERROR: The ip address should have been updated once an hour, 
      but more than 4000 seconds have passed since the last update.
      <br> $time - $mtime = $Diff \n";
      }
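    The heart of IP.pl is the comparison between the current time and the file's mtime. The same staleness check can be sketched in shell; the file path below is illustrative, and GNU "stat -c %Y" (seconds since the epoch) is assumed.

```shell
# Warn when the uploaded file is more than an hour old.
# /tmp/IP.txt is just an illustrative path for this sketch.
file=/tmp/IP.txt
echo "166.93.8.200" > "$file"

now=$(date +%s)
mtime=$(stat -c %Y "$file")
age=$((now - mtime))

if [ "$age" -gt 3600 ]; then
  echo "stale: last update was $age seconds ago"
else
  echo "fresh: last update was $age seconds ago"
fi
```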
    

    You may want to consider moving this perl script into the normal cgi-bin directory of your web server. Otherwise, here is a dangerous example of how to make it so you can run perl scripts from a user's directory. THIS IS DANGEROUS! If your web server allows any user to execute a perl script, that person can get the web server to do anything they want.

    To make it so you can execute perl scripts on your web server,

    
    <Directory /home/*/public_html>
       ## Options All is redundant with some of the other options. 
        Options All Indexes FollowSymLinks MultiViews ExecCGI Includes 
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    
       #### This requires several perl apache modules
     <Files *.pl>
     SetHandler perl-script
     PerlHandler Apache::OutputChain Apache::SSIChain Apache::Registry 
     PerlSendHeader On
     Options ExecCGI
     </Files>
    

    The Cron entry to make it run hourly

    Put this in the crontab on your home computer (the machine that dials up) using the "crontab -e" command. Note that a crontab has no shebang line; each entry is just a schedule followed by a command to run.

      ### Upload the ip address once an hour
    1 * * * *   /www/Cron/Remote_Website.pl >> /www/Cron/out  2>&1
    

    Conclusion

    I know people are probably doing the same thing in different ways. I like this solution because the files are transferred securely, so nobody can snoop on what I send over the internet. To keep strangers from reading the ip address, we should also password protect the webpage and perl script that display it.
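    One common way to password protect the page is HTTP basic authentication. Here is a sketch, assuming Apache honors .htaccess files for the directory (AllowOverride must include AuthConfig); the file locations are illustrative, not taken from my setup.

```apache
# /home/web1/public_html/.htaccess -- illustrative location
AuthType Basic
AuthName "Home IP address"
AuthUserFile /home/web1/.htpasswd
require valid-user
```

    The password file is created with the htpasswd utility that ships with Apache, e.g. "htpasswd -c /home/web1/.htpasswd web1".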

    References

    1. ssh
    2. OpenSSH
    3. Apache
    4. If this article changes, it will be available here http://www.gnujobs.com/Articles/17/Remote_Website.html

    Mark works as an independent consultant donating time to causes like GNUJobs.com, writing articles, writing free software, and working as a volunteer at eastmont.net.


    Copyright © 2001, Mark Nielsen.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Learning Perl, part 3

    By Ben Okopnik



    The trouble with teaching Perl as a first computer language is that your students won't appreciate it till they start learning their second. The trouble with teaching Perl as a second language is that there's no single suitable first language to go in front.
     -- Larry Wall

    When they say that Perl is a `glue language', what they really mean is that it is good for cleaning up after the mistakes of other programs.
     -- Mark-Jason Dominus in comp.lang.perl.misc
     
     

    Overview

    This month, we'll look at Perl's conditional and looping constructs, and look at a few scripts that use them. We will also explore how they work with Perl's variables, and take a quick look at capturing user input. Once you understand this part, I suggest hacking out a couple of experimental scripts and playing with them; sure, you'll make mistakes - but from this point on, you'll actually need to supplement your reading by getting down and dirty. If you don't play, you can't win...
     
     

    Conditionals

    Here are the conditional statements that Perl uses; nothing particularly unusual, if you're used to conditionals in other languages. Perl checks if the condition is true or false, and branches the execution based on that.



    if    ( traffic_light_is_red ) {     # If condition 1 is true, do
               stop;                     # Action 1
    }
    elsif ( traffic_light_is_yellow ) {  # If condition 2 is true, do
          hit_the_gas;                   # Action 2
    }
    else  {
                                         # In all other cases, do
          proceed_with_caution;          # Action 3
    }

    Note that the "elsif" clause isn't required; neither is the "else". Also, note that "else" is the 'catch-all option': if the light is anything except red or yellow - burned out, just got knocked down by accident, etc. - the action is 'proceed_with_caution'.

    Unlike C, even single actions must be enclosed in a block (defined by the curly brackets):

    if ( $tomato eq "red" )   print "Ripe.\n";      # WRONG!
    if ( $tomato eq "red" ) { print "Ripe.\n"; }    # Right


    unless ( $blarg == $foo ) {          # If condition 1 is false, do
           print "Unequal!.\n";          # Action 1
    }
    else   {                             # Otherwise, do
           print "They're equal.\n";     # Action 2
    }

    Pretty obvious. It may help to think of "unless" as the "if not" conditional. Once again, the "else" is optional. No, there's no such thing as "elseunless". :)
     
     

    Loops

    Ah, wonderful loops. These are the things that make actions happen, as many times as we want, based on a condition. You might even say that loops are the main reason for computers in general; their main use, as the tool that they are, is precise repetitive work. Here are the three most common types of loops in Perl:



    while ( $cat eq "away" ) {             # While cond. 1 is true, do
          print "The mice will play.\n";   # Action 1
    }



    until ( $time > 1159 ) {        # While cond. 1 is false, do
         print "It's morning.\n"    # Action 1
    }



    The "for" loop can be implemented in two different ways - one is like the "for" loop in C:

    for ( $n = 99; $n > 0; $n-- ) {
        print "$n bottles of beer on the wall, $n bottles of beer,";
        ...
    }

    In this case, we set $n to an initial value (99), decrement it by 1 each time we go through the loop, and check to make sure that it's greater than 0. If it's not, we exit the loop.

    The second method, somewhat like the Clipper, FoxPro, etc. "foreach" loops, is by far the most common:

    foreach $n ( 0..1000 ) {
            print "Day $n on this deserted island. So far, I've had ";
            print $n * 100, " bananas. I hope I'm rescued soon.\n";
            ...
    }

    It can also be used this way:

    for ( 0..1000 ) {
        print "Day $_ on this deserted island. So far, I've had ";
        print $_ * 100, " bananas. I hope I'm rescued soon.\n";
        ...
    }

    Our old friend, the "$_" (explained in the previous part of this series), does indeed come in handy. Note that "foreach" is just an alias for "for", and they can be used interchangeably.


    All of the above conditionals and loops can also be used as single-statement modifiers, as well:

    print "This is line $_ of 50.\n" for ( 1..50 );

    The above will print 50 lines, numbered in an obvious way.

    print "I've found him!" if /Waldo/;

    The above line will be printed if the default buffer ($_) contains a match for "Waldo".
     

    An interesting fact that combines well with loops and conditionals is that empty variables in Perl return a null value - which is "false". This is perfect for checking them out:

    print if $_;            # Prints $_ if it contains anything

    The next example shows that a zero value is also false:

    print "5280 is true.\n" if 5280;   # This will print.
    print "0 is true.\n" if 0;         # This won't print.
    

    Here's an example with a list:

    while ( @a ) {
          print pop @a;     # "Pop" the last value off @a and print it
          $count =  @a;     # Get the number of elements in @a
          print $count, " elements left in \@a.\n";
    }

    When the last element has been popped off, the loop will end.

    unless ( %hash ) {
           %hash = ( 'first' =>  'Mighty Joe',
                     'last'  =>  'Young',
                     'type'  =>  'gorilla',
                     'from'  =>  'Pangani Mountains',
                     'born'  =>  '1949',
                     'Mom'   =>  'Jill',
                     'Dad'   =>  'Gregg'
           );
    }

    If "%hash" is empty, we populate it with some initial values.
     

    The range operator, which we've used a couple of times so far, is a useful widget: it allows you to specify a range of numbers or letters. Note that the ranges have to be of the same 'kind' - if you specify ('a'..'Z') or ('A'..'z'), the output will not be what you expect. Also, you cannot specify ('z'..'a'); that won't work either. However, there is an easy way to do that:

    foreach $letter ( reverse 'a'..'z' ) {
        print "$letter\n";
    }

    It will also properly increment "letter lists":

    for ( 'aa'..'zz' ) {
        print "$_ ";        # Will print "aa ab ac ... zx zy zz"
    }
     
     

    User Input

    Capturing keyboard input, or input from STDIN in general - such as the lines piped to the input of our script via something like

    cat file | perl_script

     - is easy; it's what Perl's "diamond operator" is for.
     

    while ( <> ) {        # Capture all keyboard or piped input
          print;          # Print each line as long as input exists
    }

    The above works exactly like "cat" - it will print all input piped to it, will "cat" a file if it's run with the filename used as an argument, and will accept (and echo) user input until you hit Ctrl-D or Ctrl-C. It can also be written this way:

    print while <>;

    for a more "Perlish" syntax. Note that "<>" and "<STDIN>" are related but not equivalent:

    print while <STDIN>;

    will respond to keyboard and piped input, but will not print the contents of a file supplied as an argument. I've never found a situation where I needed that kind of functionality, so I simply use "<>".

    If you want to assign user input to a variable, Perl also makes that easy - but there's a bit of a trap built in of which you need to be aware:

    $answer = <>;        # Get the input, assign it to the variable
    if    ( $answer eq "y" ) {
          print "Yes\n";
    }
    elsif ( $answer eq "n" ) {
          print "No\n";
    }
    else {
          print "No idea!\n";
    }

    The above script will always print "No idea!" Hmm... it looks right; what could be the problem?

    The problem is that Perl captures everything that you give it. So, when you type "y", what's the next key you hit? "Enter", that's what! So, the variable stored in $answer is NOT "y", it's "y\n" - the answer and the linefeed. How do we deal with that? Perl, of course, has a function - one you should always use when getting user input:

    chomp ( $answer = <> );

    "chomp" will remove the linefeed, or "end-of-line" character, from the string to which it is applied. It will also remove EOLs from every element of an array which it receives as an argument. The old Perl4 version, "chop", removed the last character from a scalar (or from the elements of the array) no matter what it was; it's still available if you should need it for that purpose, but for taking user input, use "chomp" (also known, via Perl's error messages, as the "safe chop").
     
     

    Exercises For The Mind

    Try building a couple of scripts, just for your own education and entertainment:

    A script that takes a number as input, and prints "Hello!" that many times. As a bonus, check the input for illegal (non-numeric) characters (hint: use //, the match operator.)

    A script that takes the current hour (0-23) as input and says "Good morning", "Dobriy den'", "Guten Abend", or "Buenas noches" as a result. <grin>

    If you come up with something particularly clever, don't hesitate to send it to me for the next part of this series: you'll get the credit for writing it, I'll happily dissect it for you, and we'll both become micro-famous and retire to Belize on the proceeds. <laugh>

    Don't forget: your shebang line should always contain "-w". If you don't ask Perl to help you with your mistakes, you'll be wasting a lot of time. Let the computer do the hard work!
     

    #!/usr/bin/perl -w
    print "See you next month!"
     

    Ben Okopnik
    perl -we'print reverse split//,"rekcah lreP rehtona tsuJ"'


    References:

    Relevant Perl man pages (available on any pro-Perl-y configured
    system):

    perl      - overview              perlfaq   - Perl FAQ
    perltoc   - doc TOC               perldata  - data structures
    perlsyn   - syntax                perlop    - operators/precedence
    perlrun   - execution             perlfunc  - builtin functions
    perltrap  - traps for the unwary  perlstyle - style guide

    "perldoc", "perldoc -q" and "perldoc -f"


    Copyright © 2001, Ben Okopnik.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    So You Like Color !!!
    (The mysterious ^[[ characters)

    By Pradeep Padala


    Have you ever redirected the output of a curses program with colors and wondered what those mysterious ^[[ are? Did you ever try to produce colors with a printf without using curses? If the answer to either of these questions is yes, read on...

    This article attempts to explain those mysterious characters that one finds in the output of a curses program which produces colors. Later on, we extend the concept to produce colors with a mere printf.

    Terminal Codes

    In the olden days of teletype terminals, terminals sat away from the computers and were connected to them through serial cables. A terminal could be configured by sending it a series of bytes. All of a terminal's capabilities could be accessed through these byte sequences, which are usually called escape sequences because they start with an escape (0x1B) character. Even today, with vt100 emulation, we can send escape sequences to the emulator and they will have the same effect on the terminal window. Hence, in order to print color, we merely echo a control code.

    Type this on your console.
    	echo "^[[0;31;40mIn Color"
    

    The first character is an escape character, which looks like the two characters ^ and [. To type it, press CTRL+V and then the ESC key. All the others are normal printable characters. You will see the string "In Color" in red. The terminal stays that way; to revert, type this

    	echo "^[[0;37;40m"
    

    As you can see, it's pretty easy to set a color and reset it back. There are a myriad of escape sequences with which you can do a lot of things: moving the cursor, resetting the terminal, and more.
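    If typing a literal escape character with CTRL+V is inconvenient (inside a script, say), printf can generate it for you: \033 is the octal code for the escape character (0x1B).

```shell
# Build the color-on and color-off sequences with printf instead of
# embedding a raw ESC byte in the source.
red=$(printf '\033[0;31;40m')      # red foreground, black background
reset=$(printf '\033[0;37;40m')    # back to white on black
printf '%sIn Color%s\n' "$red" "$reset"
```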

    The Color Code:     <ESC>[{attr};{fg};{bg}m

    I'll explain the escape sequence to produce colors. The sequence to be printed or echoed to the terminal is

    	<ESC>[{attr};{fg};{bg}m
    

    The first character is ESC which has to be printed by pressing CTRL+V and then ESC on the Linux console or in xterm, konsole, kvt, etc. ("CTRL+V ESC" is also the way to embed an escape character in a document in vim.) Then {attr}, {fg}, {bg} have to be replaced with the correct value to get the corresponding effect. attr is the attribute like blinking or underlined etc.. fg and bg are foreground and background colors respectively. You don't have to put braces around the number. Just writing the number will suffice.

    {attr} is one of following

    	0	Reset All Attributes (return to normal mode)
    	1	Bright (Usually turns on BOLD)
    	2 	Dim
    	3	Underline
    	5	Blink
    	7 	Reverse
    	8	Hidden
    
    {fg} is one of the following
    	30	Black
    	31	Red
    	32	Green
    	33	Yellow
    	34	Blue
    	35	Magenta
    	36	Cyan
    	37	White
    
    {bg} is one of the following
    	40	Black
    	41	Red
    	42	Green
    	43	Yellow
    	44	Blue
    	45	Magenta
    	46	Cyan
    	47	White
    

    So to get a blinking line with Blue foreground and Green background, the combination to be used should be

    	
    echo "^[[5;34;42mIn color"
    
    which actually is very ugly. :-) Revert back with
    echo "^[[0;37;40m"
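    To survey the palette without memorizing the table, a short loop can print every foreground color number in its own color (a sketch; printf turns \033, the octal code for ESC, into the escape byte):

```shell
# Print 30..37 each in its own color on a black background, then reset.
line=''
for fg in 30 31 32 33 34 35 36 37; do
  line="${line}$(printf '\033[0;%d;40m%d' "$fg" "$fg") "
done
printf '%s\033[0;37;40m\n' "$line"
```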
    

    With printf()

    What if you want to use this functionality in a C program? Simple! Before you printf something, print the escape sequence that produces the desired color. I have written a small routine textcolor() which does this automatically. You can use it in your programs along with the #define constants.

    textcolor()

    #include <stdio.h>
    
    #define RESET		0
    #define BRIGHT 		1
    #define DIM		2
    #define UNDERLINE 	3
    #define BLINK		5
    #define REVERSE		7
    #define HIDDEN		8
    
    #define BLACK 		0
    #define RED		1
    #define GREEN		2
    #define YELLOW		3
    #define BLUE		4
    #define MAGENTA		5
    #define CYAN		6
    #define	WHITE		7
    
    void textcolor(int attr, int fg, int bg);
    int main()
    {	textcolor(BRIGHT, RED, BLACK);	
    	printf("In color\n");
    	textcolor(RESET, WHITE, BLACK);	
    	return 0;
    }
    
    void textcolor(int attr, int fg, int bg)
    {	char command[13];
    
    	/* Command is the control command to the terminal */
    	sprintf(command, "%c[%d;%d;%dm", 0x1B, attr, fg + 30, bg + 40);
    	printf("%s", command);
    }
    

    The textcolor() routine is modeled on the Turbo C API function of the same name: you call it to set the color, then print normally with printf(). (In Turbo C, the corresponding function for colored console output was cprintf().)

    A Demo of colors

    #include <stdio.h>
    
    #define RESET		0
    #define BRIGHT 		1
    #define DIM		2
    #define UNDERLINE 	3
    #define BLINK		5
    #define REVERSE		7
    #define HIDDEN		8
    
    #define BLACK 		0
    #define RED		1
    #define GREEN		2
    #define YELLOW		3
    #define BLUE		4
    #define MAGENTA		5
    #define CYAN		6
    #define	WHITE		7
    
    #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
    
    char *attrs[] = {"NORMAL", "BRIGHT", "DIM", "UNDERLINE", "BLINK",
    		 "REVERSE", "HIDDEN", "EXIT"};
    char *colors[] = {"BLACK", "RED", "GREEN", "YELLOW", "BLUE", "MAGENTA",
    		 "CYAN", "WHITE", "EXIT"};
    void textcolor(int attr, int fg, int bg);
    int print_menu(char *array[], int n_options, char *title);
    int main()
    {	int attr, fg, bg;
    	int attr_size, colors_size;
    	
    	attr_size = ARRAY_SIZE(attrs);
    	colors_size = ARRAY_SIZE(colors);
    	while(1)
    	{	printf("\n");
    		attr = print_menu(attrs, attr_size, "Choose the attr you want:");
    		if(attr == attr_size - 1)
    			break;
    		fg = print_menu(colors, colors_size, "Choose the foreground you want:");
    		if(fg == colors_size - 1)
    			break;
    		bg = print_menu(colors, colors_size, "Choose the background you want:");
    		if(bg == colors_size - 1)
    			break;
    		printf("\n");
    		textcolor(attr, fg, bg);	
    		printf("This is what you get if you use the combination %s attribute "
    		       "%s foreground and %s background", attrs[attr], colors[fg], colors[bg]);
    		textcolor(RESET, WHITE, BLACK);
    		getchar();	/* swallow the newline left over from scanf() */
    		getchar();	/* wait for Enter so the sample stays visible */
    		system("clear");
    	}
    	return 0;
    }
    
    int print_menu(char *array[], int n_options, char *title)
    {	int choice, i;
    	for(i = 0;i < n_options; ++i)
    		printf("%d.%s\n", i, array[i]);
    	printf("%s", title);
    	scanf("%d", &choice);
    	return choice;
    }		
    void textcolor(int attr, int fg, int bg)
    {	char command[13];
    
    	/* Command is the control command to the terminal */
    	sprintf(command, "%c[%d;%d;%dm", 0x1B, attr, fg + 30, bg + 40);
    	printf("%s", command);
    }
    

    This program asks the user to play with attributes and colors, and shows a string in the chosen combination. I usually use it to find the best combination of colors for my GUIs.

    The Catch

    Then what's the catch? If producing color is so easy, why do people waste their time writing huge programs in curses, which in turn query terminfo in a complex way? As we know, there are many terminals with very few capabilities and terminals which don't recognize these escape codes or need different codes to achieve the same effect. So if you want a portable program which would run on any terminal with the same (or reduced) functionality, you should use curses. Curses uses terminfo to find the correct codes to accomplish the task in style. Terminfo is a big database which contains information about the various functionalities of different terminals.

    But if you just want to write a simple program which produces color on a Linux console or xterm window, you can just use the escape sequences above to do it easily. The Linux console mostly emulates vt100, so it recognizes these escape sequences.

    With tput

    But there is a way to query the terminfo database and do the work. tput is the command which queries the database and executes the functionality you specify. The two capabilities setf and setb are useful to set foreground and background colors. Use this to set foreground color to red and background color to green.

    	tput setf 4	# tput setf {fg color number}
    	tput setb 2	# tput setb {bg color number}
    

    This can be used in shell scripts where you want. See the tput manual page for additional capabilities of tput. The terminfo manpages contain a lot of information regarding terminal capabilities - how to get and set their values and more. There are two terminfo manpages. "man 5 terminfo" describes the terminfo database. "man 3ncurses terminfo" describes the C functions that use the database.

    These are the color numbers to be passed as arguments to "tput setf" and "tput setb". Note that setf/setb use a different ordering than the ANSI escape codes listed earlier, which is why 4 means red here:

    	0	Black
    	1	Blue
    	2	Green
    	3	Cyan
    	4	Red
    	5	Magenta
    	6	Yellow
    	7	White
    

    Have fun !!!



    Copyright © 2001, Pradeep Padala.
    Copying license
    http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Book Review: Network Printing

    By Dustin Puryear


    Network Printing
    O'Reilly and Associates
    October 2000
    ISBN 0-596-00038-3
    $34.95

    There are few applications as beneficial, pervasive, and--oftentimes--complex as network printing. Network printing is beneficial because it reduces the number of printers an organization requires: allowing users to print to a limited set of shared printers, rather than giving each user a dedicated printer, realizes an obvious reduction in capital cost. It also saves space and power. (These are two often overlooked but important factors.) Network printing is so pervasive precisely because of this benefit--it reduces cost, both in capital outlays and in maintenance.

    Unfortunately, network printing can also be quite complex. This is especially true for heterogeneous networks, where administrators need to worry not only about printers and print servers speaking the same lingo, but also about whether each device is actually using the same network-layer protocols (e.g., TCP/IP). Even when a network is homogeneous there can be difficulties, especially in large organizations where printers number in the hundreds or thousands.

    In order to combat this complexity, and the rise in cost and overhead that comes with it, an administrator needs a solid set of documentation and a framework from which to grow. O'Reilly has attempted to satisfy just this need with the release of "Network Printing," by Todd Radermacher and Matthew Gast. Published in October of 2000, "Network Printing" provides a step-by-step guide for building an infrastructure to support network printing in heterogeneous networks (and, by extension, homogeneous ones as well).

    So what exactly do the authors, Radermacher and Gast, bring to the table? Both have several years of experience in the computer industry. They also share a very readable writing style, consistently speaking to the reader in the first person. (This helps engage the reader in the material, and often makes for more readable technical literature.) Now, on to the book!

    In Chapter 1, "A Brief History of Printing and Publishing", Network Printing begins with an introduction to printing in general. By "in general" I mean the entire field of printing, and not just network printing. The authors give a quick overview of the history of printing, including the introduction of such notables as papyrus scrolls and the Linotype. Personally, I feel this type of material is usually best left to the history books, but you may disagree.

    The second chapter, "Printer Languages," progresses to the more relevant topic of page-description languages. A page-description language is the lingua franca shared by a print server and a printer. Common examples, and ones that are covered in the book, are Adobe's PostScript and Hewlett-Packard's Printer Command Language (PCL). All in all, the authors do a good job of summarizing these languages. However, if you are looking for in-depth coverage, you will need to go elsewhere.

    Chapters 3, 4, and 5 concern three popular UNIX print systems currently in use: BSD, SysV, and LPRng. The emphasis of the book is on using UNIX as the central print server platform for an organization, so the concentration on these systems is important. (However, I would have liked to see more focus on NT print servers.) Special attention is paid to print filters, which form the core of the UNIX print process.

    In Part II, "Front-End Interfaces to UNIX queues," the authors begin with the requisite chapter on Samba. Chapter 6, "Connecting Windows to UNIX Servers: Let's Samba", describes deploying Samba on UNIX machines so that the servers can interface with Windows networks. Certainly, this book is not the end-all for documentation relating to Samba and its various configuration options, but Gast and Radermacher cover it in enough detail to get the reader up and running.

    After the coverage on Samba and Windows environments, the authors move to a more underserved support issue in many books stressing UNIX solutions: integration with Macintosh and NetWare networks. In Chapters 7 and 8 the authors cover netatalk and ncpfs, respectively. Similar to the Samba chapter, the authors' main focus here is to educate the reader about the aspects of the support software relating to printing.

    In Part III, "Administration," Radermacher and Gast enter into one of the more crucial aspects of network printing--effectively and efficiently administering the system. At this point the authors assume you have the knowledge to implement network printing for the various networks covered, and they move to making the system not only effective but also efficient.

    In Chapter 9, "Using SNMP to Manage Networked Printers," the authors demonstrate how to use SNMP to monitor and control your printer infrastructure. Of note are their good overview of SNMP and their review of SNMP-based monitoring tools, such as MRTG. It is not the strongest chapter in the book, but it is more than sufficient.

    Next, in Chapter 10, "Using Boot Servers for Basic Printer Configuration," and Chapter 11, "Centralized Configuration with LDAP," the emphasis is on methods for maintaining a centralized configuration for all of the network printers. In small to medium networks these chapters may not be truly useful, but for large installations, centralized configuration is vital. The chapter on LDAP is especially informative, and offers several insights.

    Finally, in Chapter 12, "Accounting, Security, and Performance," the authors tie up many loose ends left from earlier chapters. The main point of this chapter is demonstrating the use of scripts for accounting, and for monitoring and tuning server performance. Unfortunately, the section on security is rather small; I would have liked to see more detail. Alas, it was not forthcoming.

    In conclusion, I think this is a rather well-done book. The authors did an excellent job of keeping a rather boring subject (for most of us, at least) somewhat upbeat. I was also quite happy to see several keen insights, especially the use of LDAP to pull configurations to print servers. If you are a network administrator who is not afraid of Linux or UNIX and needs to better organize and control a printer infrastructure, then this is an excellent resource.


    Copyright © 2001, Dustin Puryear.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


     

    OO Thinking 

     
     
     
     
     

    By Jason Steffler


    Article #5 - Apr 2001

    Abstract


        For those who haven't read the previous articles, be sure to read the statement of purpose first.  This month, we're going to discuss OO thinking.  For those looking to read the whole series locally or information about upcoming articles, you can check the MST page.  For those looking for further information on learning Squeak, here are some good resources.
        This is the last planned article for this series.  The reader interest has been high enough for me to continue with the next series, but unfortunately my available writing time has quickly dwindled :-( as my wife nears her due date :-)  So this will be the last regular article at least for a while.

    Quote of the day

    Reason never changed a man's opinion which by reason he never acquired.
            -- Mark Twain

    OO Thinking

        If you're just getting into OO from another programming background, you'll soon realize that it requires a change in the way that you think, the way you approach problems, and (IMHO) how much fun you're having.  This month, we go over some things to keep in mind when doing OO programming.
     

    Breaking Linear Thinking

        This is the first hurdle I've seen many people trip over.  They're so used to programs with a main() routine of some sort, that when they first dip their toes into the Smalltalk pool they're frightened off by not being able to find a linear beginning, middle, and end of something.  Realize that Smalltalk is about working with a group of collaborating objects.  To be sure, you will need some entry point to your code/application; however, it will likely be in the form of opening your starting window, then saving/stripping your image.
        Thinking of problems in terms of nouns and verbs (objects and responsibilities) is a more natural way of thinking, and often leads to a much different decomposition of the problem than functional decomposition.  Try to identify which objects are inherent in the problem and which objects need to be involved to help out, then think of the most basic responsibilities and distribute them appropriately across the objects.
        This leads us to our next item:  OO programming lends itself well to iterative development.  It's a natural activity to define the basic objects, then start adding basic relationships and responsibilities.  If you find something doesn't fit right, then shift the responsibility elsewhere.  Flesh out your objects and responsibilities over time.
        Try to use short methods to help maximize reuse and maintainability.  If you find yourself writing 100 line methods, then you're still thinking linearly.  The average method length varies depending on whom you ask, but it should be short - somewhere around 8 statements or so.  Of course, there are always exceptions to any rule - this is just a rule of thumb.

    Decision Making vs Commanding

        This is what I often think is the most fundamental difference between OO programming and procedural programming.  In procedural programming it's common to do things in terms of decision making.  You do things like:
    • if this, then that, else that
      • For example, if data is an integer and user input is a float, then convert the float to an integer to add
    • for i = 1 to i = maxRange do this unless i > maxBounds, and if early break condition is met then break out of loop
    • 1 + 2 * 3 = ?
      • This example uses operator precedence, which is something that most languages have.  The statement is evaluated as: (1 + (2 * 3)) = 7.  But to determine precedence, the language needs to decide which operator to apply first.
        A common problem that arises from decision-making programming is that similar decisions are made in several parts of a program.  Then when requirements or needs inevitably change, there are many different spots in your program that you need to find and modify to update all the decision making.
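    As a hedged illustration (in Python rather than Smalltalk, since the point is language-independent; all names are made up), the decision-making style might look like this:

```python
def add(a, b):
    # Decision-making style: the caller inspects types and decides what to do.
    if isinstance(a, int) and isinstance(b, float):
        return a + int(b)      # "convert the float to an integer to add"
    elif isinstance(a, float) and isinstance(b, int):
        return int(a) + b
    else:
        return a + b

# The same type checks tend to be repeated at every call site in the program.
result = add(1, 2.7)   # 1 + int(2.7) = 3
```

    When the conversion rule changes, every if-chain like this must be found and updated.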

        In OO programming, it's more common to do things in terms of commanding.  You command (or ask if you're polite) objects to do things.  If the object shouldn't do something, or should do something differently, then it should know that.  Since you ask different objects the same thing, and they respond as each of them should, there's no decision making.  You do things like:

    • object doSomething
      • For example, it doesn't matter that you're adding a float to an int, the float object knows how to add floats to itself, how to add ints to itself, how to add fractions to itself.
    • aCollection do: [:eachElement | eachElement doSomething]
      • Notice that there is no bounds checking - a collection object already knows how to do that and does it for you.
    • 1 + 2 * 3 = ?
      • In this example, remember integers are objects too in Smalltalk (part of the pure OO nature of Smalltalk).  So we're asking the object 1 to add itself to 2, then the resulting object to multiply itself by 3.  Hence, the statement is evaluated as: ((1 + 2) * 3) = 9.
      • As a side note, it's funny how often I've seen some of my C++ or Java coworkers flee from Smalltalk because this doesn't make sense to them.  They still haven't entirely made the shift to OO thinking.
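    The commanding style can be sketched in Python too (for illustration only; in Smalltalk the class library objects already behave this way). Each object responds to the same message as it should, so the caller makes no decisions:

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

shapes = [Circle(1), Square(2)]
# Like "aCollection do: [:each | each doSomething]": command each element;
# each object knows how to compute its own area, so there is no if/else.
areas = [each.area() for each in shapes]
```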

    Don't Sweat the Details

        I once heard Alan Knight remark that you know somebody is starting to get Smalltalk if they answer the question:  "How does Transcript show: 'HELLO WORLD' work?" with: "I don't care".   A common theme among Smalltalk newbies is a need to know exactly how everything works, and step through all the methods of the objects from the library that they use.  This is related to linear thinking, in that you need to understand how a linear path flows to determine how it broke down the road.  If you find yourself sweating the details of the class library, then you're probably still in linear thinking mode.
        A related theme is that Smalltalk lends itself to top-down coding.  Put off work as long as possible and put off decisions as long as possible - abstract and stub out responsibilities if you can.  It's a powerful feeling to define even a trivial system that works, then keep it working as you add real meat to it.  You're most often in a state of things working.

    Simplification by Encapsulation

        Try and group data together with appropriate operations in an object.  If you're acting directly on an object's data in some manner, then you're breaking encapsulation.  If you're doing something like:  anObject aDataAttribute aPartOfAttribute doSomething, then you're breaking encapsulation.
        A nice example of encapsulation is the looping noted above.  The collection class knows how many elements it has, and how to loop over its elements, and you're not concerned with bounds checking nor should you be.
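    A Python sketch of the same idea (all names hypothetical): keep the data and the operations on it together, so callers never reach through an object into its parts.

```python
class Account:
    """Data (the transaction list) and its operations live together."""
    def __init__(self):
        self._transactions = []     # internal: callers should never touch this

    def deposit(self, amount):
        self._transactions.append(amount)

    def balance(self):
        # The object loops over its own data; no caller does bounds checking
        # or reaches through to self._transactions.
        return sum(self._transactions)

acct = Account()
acct.deposit(10)
acct.deposit(5)
total = acct.balance()
# Breaking encapsulation would look like: acct._transactions.append(10)
```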

    Reuse

    Opportunities for reuse abound, and not just from the usual place of inheritance.

    ...through the class library

        Before coding something, browse the class library to see if it's already been done for you.  Reinventing the wheel is definitely non-OO and wastes time.
        Another rule of thumb for knowing when you're getting Smalltalk is the proportion of time you spend browsing the class library versus the time you spend coding.  As you gain experience and familiarity with the library, the browsing proportion will go down, but as a beginner you should expect to spend the majority of your time browsing the library and only a minority of it coding.
        An apt remark I once heard (sorry, I don't remember the source) during a LOC-metrics flame war is that Smalltalkers should be measured by the LOC they don't write, as they save time and maintenance costs by reusing the class library.

    ...through goodies

        Smalltalk has a rich history and a great user community.  There may be a freeware or opensource goodie out there that will satisfy your needs.  Have a look at the UIUC repository, or search the web or ask the newsgroups for goodies.

    ...by appropriate responsibilities

        If it isn't your responsibility, then don't do it (or redo it).  Conversely, take on as few responsibilities as possible (only the appropriate ones).  By sticking to only appropriate responsibilities, you're more likely to be able to reuse them elsewhere in the system.
        For example, don't put the login logic for your application directly in your client's login GUI (a bad practice in general): if you later add a web GUI, you will need to either copy the login logic into the web GUI, or factor the login code out into a reusable object.
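    A minimal Python sketch of factoring the login responsibility out into one reusable object (all class names and the account data are hypothetical):

```python
class LoginService:
    """One object owns the login responsibility; every interface reuses it."""
    def __init__(self, accounts):
        self._accounts = accounts           # e.g. {"jan": "secret"}

    def authenticate(self, user, password):
        return self._accounts.get(user) == password

class DesktopGui:
    def __init__(self, login_service):
        self._login = login_service         # delegate, don't duplicate

    def submit(self, user, password):
        return self._login.authenticate(user, password)

class WebGui:
    def __init__(self, login_service):
        self._login = login_service         # same object, zero copied logic

    def submit(self, user, password):
        return self._login.authenticate(user, password)

service = LoginService({"jan": "secret"})
ok = DesktopGui(service).submit("jan", "secret")
bad = WebGui(service).submit("jan", "wrong")
```

    When a new interface is added, it simply delegates to the same LoginService; nothing is copied.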

    ...through inheritance

        Now we finally get to reuse through inheritance.  I leave this for last, as reuse through inheritance has been (IMHO) overhyped and often abused with needlessly deep class hierarchies that complicate maintenance.
        For example, if you're writing a hospital system you'd probably want to reuse a Person's characteristics of firstName, lastName, and socialSecurityNumber by making Doctor and Patient subclasses of Person.
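    The hospital example above, sketched in Python (the extra attributes on the subclasses are hypothetical):

```python
class Person:
    """Shared characteristics live once, in the superclass."""
    def __init__(self, first_name, last_name, social_security_number):
        self.first_name = first_name
        self.last_name = last_name
        self.social_security_number = social_security_number

    def full_name(self):
        return self.first_name + " " + self.last_name

class Doctor(Person):
    def __init__(self, first_name, last_name, ssn, specialty):
        super().__init__(first_name, last_name, ssn)
        self.specialty = specialty          # hypothetical extra attribute

class Patient(Person):
    def __init__(self, first_name, last_name, ssn, ward):
        super().__init__(first_name, last_name, ssn)
        self.ward = ward                    # hypothetical extra attribute

doc = Doctor("Jane", "Doe", "000-00-0000", "cardiology")
```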

    Distributing responsibilities

        Watch out for bloated parts of the system - you can see this if you're drawing your system out and the diagram looks like an octopus.  This is a sign that one object has too many responsibilities, and that object is going to get harder to maintain as it bloats.  You should aim for groups of peer objects collaborating.
        Another warning sign is a 'manager' object.  There are perfectly good times and uses for a manager object, and it can be difficult to tell whether you're abusing one.  I like to use a rule of thumb I heard from Alan Knight: object managers should be like real-world managers - they should not do any real work; they should facilitate or manage interactions between other objects.

    A Sweet Squeak

    This month's sweet squeak is the release of Squeak 3.0! :-)  To be as generic as possible, this description covers the scenario where you want to run Squeak on Windoze or Linux.   For this simple-path install, on Linux you will need root privileges.  (Note: you can install without root privileges if you're familiar with updating your paths; I'm not going to cover that topic in this simple guide.)
     

    Step 1:  Downloading Squeak 3.0

    Go to the FTP site: ftp://st.cs.uiuc.edu/pub/Smalltalk/Squeak/3.0 and download:
    • Squeak3.0-win.zip, includes:
      • Squeak.exe, the virtual machine (only good for Windoze, we'll need to compile a VM for linux)
      • Squeak3.0.image, (can use this on linux or Windoze)
      • SqueakV3.sources, (can use this on linux or Windoze)
    • Squeak-3.0pre2.tar.gz
      • Source files for compiling the linux VM

    Step 2: Set a base directory to run squeak from

    Assumes your Windoze mount point is /windoze, change for your system.

    Note:  if you don't have/want to run dual boot, just change your install location to be whatever you desire, for example: ~myuserid/squeak3, and delete the unnecessary files: NPSqueak.dll, Squeak.exe, SqueakFFIPrims.dll.

  • Make a /windoze/squeak3 directory
  • Unzip the Squeak3.0-win.zip file into the /windoze/squeak3 directory.

    Step 3: Installing the VM for Linux

    This is a very easy thing to do - even if you've never programmed or compiled anything in your life before.  Here are the steps:
    1. Unzip Squeak-3.0pre2.tar.gz to wherever (be sure to unzip with directories, this unzips into a Squeak-3.0 directory)
    2. cd to where you unzipped the sources.  (BUILD.UnixSqueak is a quick-n-easy guide from which these steps were condensed)
    3. mkdir build
    4. cd build
    5. ../src/unix/configure --bindir="/windoze/squeak3"
    6. make
    7. make install  (NOTE:  here is where you'll need root privileges with the default install, as stuff is copied to /usr/lib, /usr/man, etc)
      1. Here, you're going to get a couple of errors (unless you're installing to a Linux location), as you can't make links on a Windoze file system
      2. Copy the referenced files to your /windoze/squeak3 directory:
      3. cp /usr/lib/squeak/3.0/squeak /windoze/squeak3
      4. cp /usr/lib/squeak/3.0/inisqueak /windoze/squeak3

    Step 4: Start Squeak :-)

    • cd /windoze/squeak3
    • squeak Squeak3.0final.image
    ...I'll leave starting up Squeak in Windoze as an exercise for the reader  ;-)

    Quick tour

        When I started up the image for the first time, I was pleasantly surprised that the GUI that comes up by default is the newer morphic GUI (as opposed to the older MVC GUI that was mentioned in Article 1).  For the read-along folks, you'll see (click on the below half size images for full size images):

    The entry screen.  The Squeak logo in the top right is an xeyes type of app, where the eyes follow the mouse.
        

    If you put the mouse cursor over the logo, you'll notice the pop-up balloon help is enabled.
        

    If you click on the project at the bottom right of the screen, it'll zoom to full screen size as you enter it.
        

    And finally, let's click on the music project to have a look.
        
     

    Looking forward

        Alas, there will be no immediate looking forward due to my time constraints.   The next series I was planning covers some programming basics, like:  unit testing (SUnit), source code management (change sets and SCAN), an object tour of commonly used objects, control structures, and Squeaklets.

        In the meantime though, I highly recommend downloading v3.0 of Squeak (as described above) and trying out the STP goodies as your first goodie exploration.  They're available from:  http://www.create.ucsb.edu/squeak/STP12.html

        I've enjoyed learning about Squeak over the past few months, and I hope you've enjoyed the series.


    Smalltalk Code

    Somebody pointed out to me that the ScopedBrowser used in Article 4 doesn't work properly in Squeak v3.0, so here's an updated version.
    Note:  I noticed that SUnit is now included as part of the base image, so I've included some programmatic unit tests.  After loading the code, if you wish to run the unit tests, do: TestModel openAsMorph, then click the Run button.  You'll notice 8 windows pop up and close, and there shouldn't be any errors listed in the error pane.


    Copyright © 2001, Jason Steffler.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    A Private Home Network

    By Jan Stumpel


    1. Introduction

    Until recently I paid little attention to the security of my home network for the following reasons:
    • A dial-up home installation will not attract much attention from crackers.
    • Linux is safe anyhow, compared to MS Windows.
    • The people who put together my Linux distribution have surely taken security into account.
    • I took some security measures (hosts.allow, hosts.deny, ipchains) following the examples in the HOWTO's.
    • I don't understand this security business anyway.
    Common human psychology suggests that this is how many ordinary Linux users think. In my case, unfortunately, all these points turned out to be pure wishful thinking--apart from the last one. Indeed, probably because of the last one.

    How did I find this out? In order to prepare for the happy day in the future when permanent, high-speed connections to the Internet will be offered in my area, I decided it was a good idea to start investigating security issues. The results were shocking.

    The first shock came from looking at my long-neglected /var/log/syslog* files. A few 'refused connect from' entries. One 'connect from' to ftp which apparently succeeded. Oops. Dial-up Internet users are not overlooked by the crackers after all. And my security is not bullet-proof. Better to spend some time really looking at security. And to try to understand something of it this time. So this meant reading books, FAQ's, HOWTO's, and a lot of articles on the Web; and doing some experiments.

    This is the result of my investigations. Mind: I am not an expert, but just an amateur, a home user trying to make things work. Nothing of this comes with any guarantee.

    2. The system

    I have a very simple home network with two machines:
    • earth is a Win 95 machine without printer or modem.
    • heaven runs Debian Linux 2.1 (with various upgrades). It runs exim (for local mail service and for sending mail to the outside world), qpopper (pop3 server for use by earth), and samba (to provide file sharing and printing to earth). heaven connects on-demand to the ISP, opening a modem (ppp) link. Mail from the outside world is collected by means of fetchmail.
    This local network uses one of the IP address ranges reserved for private networks, 192.168.1.0/24. heaven is 192.168.1.1, earth is 192.168.1.2.

    The contents of /etc/hosts on heaven, and c:\windows\hosts on earth, is:

    127.0.0.1               localhost
    192.168.1.1             heaven.my.home  heaven
    192.168.1.2             earth.my.home   earth
    This shows that my network uses the domain name my.home. This name is unregistered and meant only for local use. Mail to the outside will have its 'message from' and 'envelope from' addresses translated (in the July and September issues, 1999, of LG I described how to do this).

    3. 'Private' home networks

    My notion of security is to have a private network. By this I mean a network that provides no public functions. It does not serve WWW pages or files. You cannot telnet into it. It does not even listen to anything coming in from the outside. If anyone comes knocking, there is simply no response. This idea was recently put forward by Sander Plomp, whose articles at rootprompt.org provided much of the inspiration for this piece.

    A LAN which is not connected to the Internet is a private network by definition. Unplug your modem, and you make your network private. But that is not the kind of private network that I mean. I want to use the Net, send and receive mail, browse the Web, download files, etc. I just do not want anyone from the outside to enter my network.

    Linux systems generally aren't private networks. By default, the installation procedure of a Linux distribution sets up all sorts of nice network services (like telnet, ftp, finger, etc.) which are accessible by anyone in the world, protected (if at all) only by a password. Also, Microsoft Windows home LAN's are generally not private. Connect two Win95 computers and enable 'file sharing', and the whole world can share your files while your Internet connection is up.

    To make non-private networks safe, various techniques are used; passwords of course, and also other security techniques that have been discussed in the Linux Gazette many times, like tcpd (alias tcp wrappers) and kernel-level packet filtering (with ipchains as the user interface). These techniques give some privacy to a system which is essentially public. They are like guards at the door, put there to keep out unwanted characters, while letting the desirable customers in. But why should there be doors at all? I don't want any customers. My network is private!

    If we have servers running, reachable by the outside world, we always have to worry about having made some configuration mistake which can be exploited. Also, server programs often have bugs in them which offer openings to crackers. Only recently one was discovered in named. OK, it was later patched, so people who got the newest version of named do not have to worry anymore about this bug. But what about the next one? Better not to have any doors at all!

    If you really want to enable services (mp3 distribution, or whatever) for use by computers outside your home, you must study more advanced security techniques. But if you simply want a private network for the home, read on.

    4. How safe is your network?

    To test the safety of a home network, you can have it 'scanned' from the outside, for instance by Secure Design. Click 'scan me now' and 'basic scan'. Try it both from the gateway machine and from the other machines on your network. When I did this, I got the second shock. It was embarrassing. A long list showing possible entry points into my system: Samba shares, telnet, the print service, X, the mail server, ftp, finger, etc. I had some rudimentary safety measures in place, so the system was somewhat protected from serious intrusion (I hope). But I consider it a breach of my privacy that the outside world even knows that I have a mail server (let alone letting them break into it). These services were meant for use within my home network. They are not the business of the outside world in any way.

    So: have your system scanned! Apart from Secure Design, other scan services exist, e.g. Shields Up!, DSL Reports, Sygate Online Services, and many others. A whole lot of them can be found at an Austrian site, Sicherheit im Kabelnetzwerk ('Security in the Cable Network'; there is also an English version with almost the same information). Use several scan services. Print the results. What this 'scanning' actually means will hopefully become clearer in the course of this article.

    You can also 'scan' your system yourself by calling (after su-ing to root) netstat -pan --inet. Use a wide xterm window when doing this, because the output consists of rather long lines. Programs which have 0.0.0.0 in the 'Local Address' column are visible to the whole world!

    5. Servers and clients

    The distinction between servers and clients is not always clear to users. If you want to use ftp, for instance (getting files from, and putting files into, another computer) you use an ftp client program to connect to the other computer. If that is all you want to do with ftp, the client program is all you need. An ftp server is only needed if you want to allow others to get files from, or put them into, your computer. Similarly with telnet: a client program for your own use, a server program for other people's use. The client and server programs are completely different and have different names; for instance /usr/bin/telnet for the telnet client, /usr/sbin/in.telnetd for the server. If a Linux setup program asks 'shall I set up the ftp server?', novice users may think 'well eh.. yes, I certainly want to use ftp, so go ahead'. Often you are not even asked, and an ftp server is installed by default.

    One way of creating a private network is not to install servers at all, just clients. But that will not do if you have a network at home connecting two or more computers. Inside your network you want to telnet from one machine to another, you want to run an internal mail service, etc. In other words, you can't do without servers.

    What servers do is listen. They listen for a signal that says: I want your service. The signal (at least for the TCP-based services) is a special IP packet, called a SYN packet, that enters your computer and specifies the number of a service. For instance, the number of the telnet service (that the in.telnetd program, if it is running, listens to) is 23. These numbers are usually called 'port numbers'. If the in.telnetd program is not running, no one listens to SYN packets with the number 23. So, as they say, 'port 23 is closed'.

    Ports do not exist by themselves, like little doors in your computer that you can open or close. A port is open if a server listens to it. Otherwise it is closed. A TCP port comes into existence if there is a program which listens to it, and if not, it does not exist!
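    This can be demonstrated with a short Python sketch (loopback only; the port number is picked by the OS here, purely for illustration):

```python
import socket

def port_is_open(port, host="127.0.0.1"):
    """Attempt a TCP connection (a SYN); 'open' just means someone listens."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(1.0)
        return probe.connect_ex((host, port)) == 0

# Start a listener; the port now "exists" because a program listens to it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = server.getsockname()[1]
server.listen(1)
while_listening = port_is_open(port)   # True: the SYN is answered

server.close()
after_closing = port_is_open(port)     # False: connection refused, port "gone"
```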

    How do the SYN packets get into your computer? In the case of heaven, a packet can get 'into' it in three different ways:

    • packets sent from earth come in via the Ethernet card (also called the eth0 interface); they are addressed to the fixed IP address 192.168.1.1.
    • packets from the outside world come in through the ppp link, or ppp0 interface; that also has an IP address, but it is not fixed. At every Internet session the ISP hands out a 'dynamic' address, valid for this session only.
    • packets can also be sent from heaven itself, addressed to itself as it were. This way of sending packets is often used for testing; the packets are addressed to the so-called loopback interface with address 127.0.0.1. The name localhost refers to this loopback interface, while the name heaven refers to 192.168.1.1. (This is a rather important point: names and IP addresses refer to interfaces, not to computers, although in daily usage this distinction is often blurred.)
    Now the key point is that servers normally listen for any packets with 'their' port number, no matter which way they enter the system. If we want to make a private network, offering no services to the outside world, we must somehow change this.

    It would be nice if all the server programs available on Linux systems had options specifying which interfaces they will listen to. In that case you could just tell all your servers never to listen to the ppp line, and you'd be all set. Hardly any security measures would be needed at all (tcpd, firewalls, etc.); you would only use them 'for good measure', as an extra precaution. Maybe this will happen at some time in the future, but at the moment only a few server programs have this (including the important cases of exim and samba). So we have to do several things to make our network private:

    1. it never hurts to follow the commonly-heard advice to 'close unneeded ports', in other words not to run servers that you do not need.
    2. for services that have the option, make them listen to internal interfaces (eth0 and, if necessary, loopback) only.
    3. the 'super-server' inetd (which is used for 'waking up' a lot of different servers on a Linux system) should be replaced by xinetd, which has the option to listen to internal interfaces only. NOTE: apparently, Red Hat Linux 7.0 installs xinetd by default.
    4. unsafe servers which cannot run from xinetd (like the print server lpd) should, where possible, be replaced.
    5. for the remaining difficult cases you need a firewall that blocks SYN packets from the outside. This could also be used for blocking unwanted UDP and ICMP access.
    6. for 'advanced' security, a few other possibilities suggest themselves. One is not to use IP masquerading and forwarding on your network.
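    As an example of point 3 (an illustrative fragment only; the attribute values depend on your setup), a service entry in /etc/xinetd.conf can name the interface and addresses it will answer, confining a service to the internal network:

```
service telnet
{
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/sbin/in.telnetd
        interface   = 192.168.1.1
        only_from   = 192.168.1.0/24
}
```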

    6. Removing unneeded services

    6.1 Unnecessary inetd services

    First, there are some services which run 'from inetd.conf'. Almost all Linux systems have a 'super-server' called inetd, whose job it is to listen to a lot of ports at the same time, and to 'wake up' services when needed. However, it will also wake up services which you do not need.

    Examples of unneeded services are:

    • the ftp server. I do not plan to serve any files to the outside world, and internally on my network I can use Samba (and smbfs) to transfer files. Not having an ftp server in no way stops you from running an ftp client when you want to, and to exchange files with the outside world. But most distributions, including Debian, install an ftp server by default.
    • the portmapper and anything related to RPC calls (same reason). The portmapper is used for allowing remote procedure calls. Basically you need this if you use NFS. But I do not use NFS (Samba, which I need anyway because there is a Win95 machine on the network, provides enough file sharing facilities). So anything related to the portmapper and RPC can be commented out from inetd.conf.
    • finger and ident. About the usefulness or otherwise of 'ident', opinions seem to be divided. I removed it, and did not suffer any ill effects.
    • several other obviously unnecessary things that can be started up from inetd.conf (like saft).
    • several services in inetd.conf which are used only for testing networks: echo, chargen (pronounced kargen, 'character generator'), discard, daytime, and time. The last two (in case you are worried) have nothing to do with timekeeping on your system; they are just services which will tell your system's time to others. You don't need them, nor any of the other 'test' services.
    These services can all be disabled by commenting out (putting a # character at the beginning of) the corresponding lines in /etc/inetd.conf, and restarting inetd (in Debian: /etc/init.d/inetd restart).
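    That editing can also be scripted. A minimal sketch, shown here against a small sample file rather than the real /etc/inetd.conf (the sample service lines are typical examples and the /tmp paths are for illustration only):

```shell
#!/bin/sh
# Demonstrate commenting-out on a sample inetd.conf fragment.
# On a real system you would edit a copy of /etc/inetd.conf, check it,
# move it into place, and then run: /etc/init.d/inetd restart
cat > /tmp/inetd.conf.demo <<'EOF'
telnet stream tcp nowait root /usr/sbin/tcpd /usr/sbin/in.telnetd
finger stream tcp nowait nobody /usr/sbin/tcpd /usr/sbin/in.fingerd
daytime stream tcp nowait root internal
EOF

# Put a '#' in front of each line that starts with an unwanted service name.
for svc in finger daytime; do
    sed "s/^$svc /#$svc /" /tmp/inetd.conf.demo > /tmp/inetd.demo.tmp &&
        mv /tmp/inetd.demo.tmp /tmp/inetd.conf.demo
done

cat /tmp/inetd.conf.demo   # telnet unchanged; finger and daytime commented out
```

    Inspect the edited copy before moving it into place; a mangled inetd.conf can disable services you do want.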

    6.2 Other unnecessary services

    If a service is not woken up by inetd, it runs independently, as a 'daemon' or background program.  Among the daemons which you may not need (apart from the portmapper, if run as a daemon) is a local nameserver (named, pronounced name-dee). There is no reason why a small home network should run such a thing. /etc/hosts and C:\windows\hosts files on your machines, plus the addresses of your ISP's name servers in /etc/resolv.conf, will enable address lookup on your network.
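    A minimal sketch of such a hosts file, using the machine names from this article (heaven's address appears in the text; earth's address is an assumption for illustration):

```
# /etc/hosts on heaven (mirror the entries in C:\windows\hosts on earth)
127.0.0.1    localhost
192.168.1.1  heaven
192.168.1.2  earth    # assumed address for the Windows box
```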

    There is usually a command to prevent a service from starting automatically upon boot-up, by removing it from the start-up directories; for instance I got rid of a tamagotchi server, automatically installed by Debian 2.1, by calling

    update-rc.d -f tama remove

    7. Securing wanted services: non-inetd

    Now we tackle services that we do want to use, but do not want to be visible to the outside world. Often there is some configuration option that will keep the service 'private'. Examples follow.

    7.1 X

    X was designed as a network-oriented window system, but in most setups the network features are never actually used, and they present a security risk. You can eliminate X's network capabilities by starting X with a command-line option: startx -- -nolisten tcp. Secure Design then no longer reports that 'X11 is open'.  To make this permanent, you can make an alias for startx in ~/.bashrc, /etc/profile, or some other suitable location, like this:
    alias startx="startx  --  -nolisten tcp"
    The -nolisten tcp command should really be in one of the X11 resource files, but so far I haven't found out which one. The 'alias' approach works in any case. To test, run (as root) netstat -pan --inet. X should no longer be mentioned. Of course it would be nicer if we could keep X's network abilities for the local network, only blocking them against outside access, but I couldn't find a way to do that.

    7.2 Samba

    In a Debian system, the configuration file for Samba is /etc/samba/smb.conf (on other systems, it may be /etc/smb.conf). When installing Samba, I chose 'let Samba run as daemons'; it did not work properly from inetd. Any lines referring to netbios (which is what Samba uses) in /etc/inetd.conf must therefore be commented out. Then in /etc/samba/smb.conf, section [global], I added
     bind interfaces only = True
     interfaces = 192.168.1.1
    After /etc/init.d/samba restart, the Samba daemons only listen to our home LAN. They are no longer visible to the outside world. Check with netstat -pan --inet, and by having the system scanned.

    7.3 Exim

    Exim is the mail server (or Mail Transport Agent, MTA) on my system. You may have something else (like sendmail or postfix) but then the same principle applies: your private mail agent should not listen to the outside world. People who send mail to you, and to the other users in your home, send it to mail accounts at the ISP (or to several mail accounts at different ISP's). You retrieve it from there using, e.g., fetchmail. People cannot send mail to your network directly.

    Exim turns out to have an option local_interfaces (which goes into the MAIN CONFIGURATION section of /etc/exim.conf). This is a list of (IP addresses of) interfaces that exim will listen to. This only works when exim runs as a daemon, independent of inetd. To set this up:

    • In /etc/exim.conf, in the MAIN CONFIGURATION section, enter a line:

    • local_interfaces = 192.168.1.1:127.0.0.1
      (besides the local net, loopback must also be specified; otherwise fetchmail won't work, unless you call fetchmail -S yourmachinename).
    • Comment out the smtp line in /etc/inetd.conf .
    • In /etc/init.d/exim comment out the line exit 0 near the beginning, just after the line #usually this is disabled and exim runs from /etc/inetd.conf. This causes exim to run as a daemon after you call /etc/init.d/exim start. Letting exim run as a daemon means that you have to call /etc/init.d/exim restart after every change to exim.conf.
    Letting exim run as a daemon means that it consumes some cycles and some memory all the time. But as a bonus, exim's RETRY CONFIGURATION now works properly as well, which it never did when running under inetd.

    7.4 Junkbuster

    Junkbuster is an http proxy server which you can configure to keep out ads and other unwanted stuff. It works very well. In a Debian system it listens to port 5865, in other systems to port 8000. This is set in the file /etc/junkbuster/config. By default, junkbuster listens to all interfaces (in other words, to the whole world). However, you can set in the config file

         listen-address      192.168.1.1:5865

    and now only machines on our own network can connect to it (including the gateway machine that junkbuster runs on, heaven in this example, provided heaven, not localhost, is entered in the Netscape 'Preferences/Advanced/Proxies' menu).

    7.5 Other (non-inetd) services

    The above examples are only examples. If your system runs other services outside inetd, check their documentation for ways to make them private. For instance, it appears that sendmail can be made to listen only to the local network by means of

         O DaemonPortOptions=Addr=192.168.1.1

    in the sendmail.cf file. I did not try this.

    7.6 Remaining problem cases, like lpd

    lpd remains a problem. It cannot be made to listen to the internal network only. Basically, it should be replaced by something safe. Sander Plomp recommends replacing it by pdq. I've been too lazy to do this yet, but it certainly needs attention in the near future.

    Other problem cases may remain: servers which you need in your own network but which cannot be made private, and for which private alternatives do not exist. I have this problem with cannaserver, a system for inputting Japanese characters from the keyboard. Such services must be screened from the outside world by means of a packet-filtering firewall. See section 10 of this article.

    8. Masking the inetd services through xinetd

    By now the list of visible services on the system, according to the Secure Design scan, has become:
    • telnet
    • pop3
    • lpd (the print system)
    Far fewer than there used to be, but still far too many. Telnet and pop3 are started by inetd/tcpd and thus are secured by hosts.allow and hosts.deny, but I'm not sure that this protection is 100% cracker-proof. lpd remains totally 'open' and, as far as I can guess, unsecured. It cannot, I think, be started from inetd.

    8.1 Replacing inetd

    Security for a home network really requires replacing inetd by something which can distinguish between requests for service from the local network and from the outside. Plomp recommends tcpserver; I tried xinetd. First kill inetd, then install xinetd. Important: The Debian script /etc/init.d/xinetd not only starts the xinetd daemon by itself, but also the portmapper. We do not need/want the portmapper, which is used for RPC calls and NFS, which we do not use. So anything related to the portmapper in /etc/init.d/xinetd must be commented out (# at the beginning of the line).

    One way to configure xinetd for telnet and pop3 is to put in /etc/xinetd.conf:

    defaults
        {
          instances = 10
          log_type = SYSLOG daemon
          log_on_success += DURATION HOST USERID
          log_on_failure += HOST
          interface = 192.168.1.1
        }

    service telnet
        {
          socket_type = stream
          wait  = no
          user  = root
          server = /usr/sbin/in.telnetd
        }

    service pop-3
        {
          socket_type = stream
          wait  = no
          user  = root
          server = /usr/sbin/in.qpopper
        }

    So apart from a general 'defaults' section which specifies the interface, there is a separate section for each service that you want to run. Although the format is completely different, the data for the various sections can be found in your existing inetd.conf. See also man xinetd.conf.
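    As an illustration of that correspondence, a typical inetd.conf telnet line (the exact line varies per distribution; this one is an assumption) carries the same data as the xinetd stanza above:

```
# /etc/inetd.conf style, all on one line:
#   telnet stream tcp nowait root /usr/sbin/tcpd /usr/sbin/in.telnetd
#
# maps onto the xinetd stanza as:
#   socket_type = stream                (second field)
#   wait        = no                    (from 'nowait')
#   user        = root                  (user field)
#   server      = /usr/sbin/in.telnetd  (the daemon that tcpd would wrap)
```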

    I started xinetd and verified that it is now possible to telnet to heaven both from heaven itself and from earth. Meanwhile, Secure Design no longer reports that my system has open telnet and pop3 ports! Success! NOTE: from my own machine, telnet heaven succeeds, but telnet localhost does not. xinetd can only bind to one interface; in this case 192.168.1.1, not at the same time to localhost, which is the loopback interface (127.0.0.1).

    By now, all other services in /etc/inetd.conf have been commented out. Therefore inetd no longer does anything and we can get rid of it in the boot-up scripts. In Debian, it goes like this:

    update-rc.d -f inetd remove
    Its place is taken by xinetd:
    update-rc.d xinetd defaults
    OK; another step towards security successfully taken.

    The output of netstat -pan --inet is now something like:

    heaven:~# netstat -pan --inet
    Active Internet connections (servers and established)
    Proto  Local Address     Foreign Address  State   PID/Program name
    tcp    127.0.0.1:25      0.0.0.0:*        LISTEN  11391/exim
    tcp    192.168.1.1:25    0.0.0.0:*        LISTEN  11391/exim
    tcp    192.168.1.1:139   0.0.0.0:*        LISTEN  10761/smbd
    tcp    192.168.1.1:5865  0.0.0.0:*        LISTEN  1670/junkbuster
    tcp    192.168.1.1:110   0.0.0.0:*        LISTEN  161/xinetd
    tcp    192.168.1.1:23    0.0.0.0:*        LISTEN  161/xinetd
    tcp    0.0.0.0:515       0.0.0.0:*        LISTEN  148/lpd MAIN
    udp    192.168.1.1:138   0.0.0.0:*                10759/nmbd
    udp    192.168.1.1:137   0.0.0.0:*                10759/nmbd
    udp    0.0.0.0:138       0.0.0.0:*                10759/nmbd
    udp    0.0.0.0:137       0.0.0.0:*                10759/nmbd
    raw    0.0.0.0:1         0.0.0.0:*         7      -
    raw    0.0.0.0:6         0.0.0.0:*         7      -

    Almost all services now listen to a local interface. The print system is the exception: it listens to address 0.0.0.0 (i.e., everywhere) on port 515. Sure enough, if the system is scanned now, only port 515 is reported as 'open'. In fact some Windows-oriented scan services will report your system as totally 'closed', because they do not scan port 515.
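    A quick way to spot anything still bound to all interfaces is to filter the netstat output for local addresses starting with 0.0.0.0. A sketch, run here against a saved copy of the sample listing above (the column positions assume the layout as printed; real netstat output has extra Recv-Q/Send-Q columns, so the local address may be a different field on your system):

```shell
#!/bin/sh
# Filter a saved netstat listing for sockets bound to 0.0.0.0 (all interfaces).
cat > /tmp/netstat.sample <<'EOF'
tcp    127.0.0.1:25      0.0.0.0:*        LISTEN  11391/exim
tcp    192.168.1.1:139   0.0.0.0:*        LISTEN  10761/smbd
tcp    0.0.0.0:515       0.0.0.0:*        LISTEN  148/lpd MAIN
udp    0.0.0.0:137       0.0.0.0:*                10759/nmbd
EOF

# Print only the lines whose local address (field 2 here) is 0.0.0.0
awk '($1 == "tcp" || $1 == "udp") && $2 ~ /^0\.0\.0\.0:/' /tmp/netstat.sample
```

    On a live system, pipe the real output instead: netstat -pan --inet | awk '...' (adjusting the field number to your netstat's layout).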

    9. What about IP Masquerading?

    For ages I have used IP Masquerading, to give the Windows box in my home access to the Internet. Or so I thought. When recently, in the course of my investigations into system safety, I switched off Masquerading, the Windows box could still use the Internet as before.

    What happened? Simply that earth, the Windows machine, only does three Internet-related things:

    1. e-mail, which does not require earth to connect to the Internet, only to the smtp and pop3 servers running on heaven;
    2. Web browsing; for this, earth connects to junkbuster on heaven, again not directly to the Internet;
    3. fetching the outside mail from the ISP; for this, earth's user telnets to heaven and runs fetchmail from there.
    So, ever since I installed junkbuster about six months ago, earth has never approached the Internet directly, and Masquerading is now superfluous. I had not realized this. Inadvertently I had created a 'proxying firewall'. This means that Masquerading can - and must - now simply be switched off. This has several advantages:
    • Simplicity: ipchains (see the next section) no longer has a FORWARD chain, so we don't have to worry about it. We do not need to set up DNS on earth (entering nameserver addresses).
    • Security: if ever some Trojan becomes established on earth, it will not be able to contact its evil accomplices through the Internet.
    The only 'downside' is that earth is now restricted to e-mail and http only. No ping and telnet to the outside world, no ftp, Real Audio, chat, etc. But for the moment mail and http are all that's required. If other services become necessary on earth, I suppose I shall have to install proxies for them on heaven.

    To switch IP masquerading and forwarding off in Debian:

    • change the first line in /etc/network/options so it says ip_forward=no
    • disable forwarding in the kernel by means of  echo 0 > /proc/sys/net/ipv4/ip_forward
    • remove any special commands for Masquerading, like ipchains -A forward -s 192.168.1.0/24 -j MASQ from your startup scripts, ip-up script, or wherever you had them.

    10. Closing the last doors

    Let's recapitulate: by eliminating services, reconfiguring other services so they don't listen to the gateway interface, by wrapping others inside xinetd, and by turning off Masquerading, we have created a system which is already quite secure without a firewall. Now it is time to add the final touch: we build a packet-filtering firewall around the system using ipchains. This should be the last step, not the first.

    One often reads the advice to configure ipchains in such a way that 'everything is blocked by default', and then to make exceptions for the things that you want to allow. Theoretically this may be the right thing, but in practice it leads to much frustration. If everything is blocked, your system will basically not work. You are more or less groping in the dark when it comes to deciding what you have to allow. So I allow everything by default, and then add restrictions one by one. If the system breaks (e.g. no ping, or no Web page viewing) the last restriction has been too drastic, and must be undone. Setting the default policy of a chain to DENY or REJECT can then (again) be the last step, not the first.

    I started by taking down the firewall (ipchains -F) and then running a simple firewall script with one rule:

    #!/bin/sh
    # simple firewall

    ipchains -F input
    ipchains -P input ACCEPT
    ipchains -A input -i ppp0 -p TCP --syn -j DENY -l

    This blocks SYN packets coming from the outside interface, enhancing the privacy of the system very considerably. Nobody from the outside can start a connection; outside scan services report that the site is completely closed (some now even call it 'stealthed' or 'invisible'). But we can add more restrictions. That is, we can use more general DENY/REJECT rules, and more specific ACCEPT rules.

    Before you add restrictions, it is useful to do some experiments. You can make ipchains-type rules which let packets through while logging them (-j ACCEPT -l). So even if (like me) you do not really know which packets to block, you can see what is going on 'normally' by keeping a window open with tail -f /var/log/syslog in it. Then afterwards you can make rules to block packets which are not 'normal'. I strongly advise you to do your own experiments, and to make rules based on your own understanding.

    After a few such experiments, my firewall script in /etc/ppp/ip-up.d looks as follows. This assumes you have no nameserver running, but have the addresses of TWO nameservers provided by your ISP in /etc/resolv.conf. Mind the important backquotes (`)! They may disappear if you cut-and-paste from this page.

    #!/bin/sh
    # A slightly more complicated firewall

    # Find external name server addresses
    ns="`grep nameserver /etc/resolv.conf | awk '{print $2}'`"
    nameserver1="`echo $ns | sed -e 's/ .*//'`"
    nameserver2="`echo $ns | sed -e 's/.* //'`"

    # Set up INPUT rules
    ipchains -F input
    ipchains -P input ACCEPT

    # Block outside input from reserved address ranges
    ipchains -A input -i ppp0 -s 10.0.0.0/8      -j DENY
    ipchains -A input -i ppp0 -s 172.16.0.0/12   -j DENY
    ipchains -A input -i ppp0 -s 192.168.0.0/16  -j DENY

    # Block TCP connections from the outside
    ipchains -A input -i ppp0 -p TCP --syn -j DENY -l

    # Block all UDP except nameserver replies
    ipchains -A input -i ppp0 -p UDP -s $nameserver1 53 -j ACCEPT
    ipchains -A input -i ppp0 -p UDP -s $nameserver2 53 -j ACCEPT
    ipchains -A input -i ppp0 -p UDP -j DENY -l

    # Allow (for now) but log all ICMP
    ipchains -A input -i ppp0 -p ICMP -j ACCEPT -l

    # From local net, allow only packets to us and broadcasts
    # Forwarding is off, other packets won't go anywhere, but
    # now we can log them to detect illegal activity on our net
    ipchains -A input -i eth0 -d 192.168.1.1   -j ACCEPT
    ipchains -A input -i eth0 -d 192.168.1.255 -j ACCEPT
    ipchains -A input -i eth0 -j REJECT -l

    # Set up OUTPUT rules
    ipchains -F output
    ipchains -P output ACCEPT

    # Don't send packets out to reserved address ranges
    ipchains -A output -i ppp0 -d 10.0.0.0/8      -j REJECT
    ipchains -A output -i ppp0 -d 172.16.0.0/12   -j REJECT
    ipchains -A output -i ppp0 -d 192.168.0.0/16  -j REJECT

    # Block all UDP except nameserver requests
    ipchains -A output -i ppp0 -p UDP -d $nameserver1 53 -j ACCEPT
    ipchains -A output -i ppp0 -p UDP -d $nameserver2 53 -j ACCEPT
    ipchains -A output -i ppp0 -p UDP -j REJECT -l

    # Allow (for now) ICMP to the outside, but log
    ipchains -A output -i ppp0 -p ICMP -j ACCEPT -l

    # We do not have FORWARD rules; forwarding is off

    Such a firewall (which you should adapt to your personal tastes and needs) will provide an extra 'shell' around the system. But basically, the security of your system should not depend on the firewall; if only because firewalls are complicated things, and it is far too easy to make mistakes with them. Many other things can be done first to ensure the privacy of your network.
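    The nameserver-extraction pipeline at the top of the script can be tried on its own. A sketch against a sample resolv.conf (the /tmp path and the two addresses are made up for illustration):

```shell
#!/bin/sh
# Check the grep/awk/sed pipeline from the firewall script against a sample.
cat > /tmp/resolv.conf.sample <<'EOF'
search example.net
nameserver 194.109.6.66
nameserver 194.109.9.99
EOF

# Same pipeline as in the script, pointed at the sample file
ns="`grep nameserver /tmp/resolv.conf.sample | awk '{print $2}'`"
nameserver1="`echo $ns | sed -e 's/ .*//'`"
nameserver2="`echo $ns | sed -e 's/.* //'`"
echo "nameserver1=$nameserver1"
echo "nameserver2=$nameserver2"
# prints nameserver1=194.109.6.66 and nameserver2=194.109.9.99
```

    If your resolv.conf lists only one nameserver, both variables end up with the same address, so the duplicate ACCEPT rules are harmless.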

    11. References and further reading

    1. Amateur Fortress Building in Linux, part 1, by Sander Plomp (rootprompt.org)
    2. Amateur Fortress Building in Linux, part 2, by Sander Plomp (rootprompt.org)
    3. Real World Linux Security, by Bob Toxen (Prentice-Hall, 2001).
    4. What is the difference between REJECT and DENY? (Linux@home)
    5. Ipchains log format (Linux@home). For understanding what you see while running tail -f /var/log/syslog.
    6. ICMP Type numbers (IANA)
    7. Setting Up Mail for a Home Network Using Exim (Linux Gazette, July 1999)
    8. Experiments with SMTP (Linux Gazette, September 1999)


    Copyright © 2001, Jan Stumpel.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    Speeding Up Your Net Browsing with PDNSD Domain Name Caching

    By Sunil Thomas Thonikuzhiyil


    1. Where to find this document
    2. About PDNSD
    3. Installation
    4. Sample configuration file
    5. Tweaking configuration files
    6. FAQs
    7. Credits

    1. Where to find this document


    http://geocities.com/sunil_tt/pdnsd.txt

    2. About PDNSD.

    DNS is the Domain Name System: it converts machine names to the IP addresses that all machines on the net have. Name serving on Unix is commonly done by a program called named. This is part of the ``BIND'' package, which is coordinated by Paul Vixie for the Internet Software Consortium.

    PDNSD is a caching DNS proxy server. When BIND acts as a caching nameserver on a local dial-up machine, it stores the name-to-number translation data in RAM only; the cache is not written back to the hard disk upon disconnection, because BIND is not intended for a user or site that is not always connected to the Net. PDNSD, unlike BIND, saves the RAM cache to a file and reads it back in for the next dial-up session.

    PDNSD can thus speed up Net surfing on a dial-up connection: since names are resolved from the cached file, no time is wasted on the name-to-number lookup over the link.

    PDNSD is distributed under the GNU/GPL and is available for download at: http://home.t-online.de/home/Moestl/

    Redhat RPMS are at:

    http://home.t-online.de/home/Moestl/

    Debian DEBS are at:

    ftp://ftp.debian.org/debian/pool/main/p/pdnsd/pdnsd_1.1.2.a-2_i386.deb

    3. Installation.

    Download pdnsd-<version>.tar.gz from the above source.

    Decompress and untar using

    tar zxvf pdnsd-<version>.tar.gz
    Change directory to pdnsd-<version> and type
    $ ./configure
    The configure script accepts a number of parameters; see the manual.txt file supplied with the PDNSD source. The command-line parameters --prefix and --with-distribution are particularly interesting.
    $ ./configure --help
    will list all options.

    I am assuming that you have not specified any command-line options. The Makefile generated by configure will then have the following defaults (it is worth taking a look at the generated Makefile):

    The default installation directory for PDNSD is /usr/local (this can be changed with the --prefix option to configure). The default location of the PDNSD cache is /var/cache/pdnsd. The PDNSD configuration file, pdnsd.conf, will be found in /etc.

    Now type:

    $ make
    This will compile pdnsd. I did not face any problem compiling it on either Debian 2.2 or Redhat 6.1. Next, su to root, as installation requires root privileges. Then type:
    # make install
    This step will do the following (quoted from pdnsd manual.txt):
    1. Copies pdnsd to $(prefix)/sbin/

    2. Copies pdnsd-ctl to $(prefix)/sbin/

    3. Copies docs/pdnsd.conf (a sample configuration) to /etc/ (and backs up /etc/pdnsd.conf to /etc/pdnsd.conf.old). If you have an /etc/pdnsd.conf.old you do not want to be overwritten, save it to another place/name before doing 'make install'

    4. Creates your cache directory if it is not there. After installation, you should check the file permissions and edit /etc/pdnsd.conf to fit your needs. If you use the run_as option, please make sure that your cache directory is owned by the user you specified with this option! (Please note that the permission issue has been fixed as of the latest releases.) Now /usr/local/sbin will contain two binaries, pdnsd and pdnsd-ctl: 'pdnsd' is the proxy DNS daemon and 'pdnsd-ctl' is a program to control the cache.

    The cache is located at /var/cache/pdnsd/pdnsd.cache. Its maximum size is set by the 'perm_cache=<value>;' entry in /etc/pdnsd.conf; by default it is 512 (KB). Increase it according to your judgement; a safe value would be 2048 (KB) for a machine having 64 MB RAM. The cache file will be 4 bytes initially and will grow as and when you browse. Cache growth will be observed only after a reboot or a restart of the PDNSD daemon, because PDNSD saves the RAM cache upon exit only.

    PDNSD must be started up each time you boot the system. For this, you have to install startup scripts. The rc folder of the source distribution contains startup scripts for Redhat, SuSE and Debian. I have not tested the SuSE scripts.

    Do the following depending on your distribution.

    3.a) Debian GNU/Linux.

    Copy pdnsd-{version}/src/rc/Debian/pdnsd to /etc/init.d and type update-rc.d pdnsd defaults. Stop BIND if you have it installed on your system. Edit /etc/resolv.conf and add the following:
          nameserver 127.0.0.1
     

    Comment out the entries for all other name servers. Start pdnsd by typing /etc/init.d/pdnsd start. Test pdnsd by typing nslookup. On my system it displays:

         Default Server: debian
         Address: 127.0.0.1
         >
    

    Stop pdnsd by typing /etc/init.d/pdnsd stop.
    Fire up your editor and add a line like this to the end of your /etc/hosts file:

         127.0.0.2    testhost
    

    Save the file and start pdnsd once again. Type nslookup. Inside nslookup type 'testhost'.

        > testhost
          Server: debian
          Address: 127.0.0.1
          Non-authoritative answer:
          Name: testhost
          Address: 127.0.0.2
    
    If this answer is obtained, it shows that your pdnsd is working (remember to remove the last line from /etc/hosts).

    3.b) Redhat Linux


    Copy pdnsd-{version}/src/rc/Redhat/pdnsd to /etc/rc.d/init.d. Stop BIND if you have it installed on your system. Edit /etc/resolv.conf and add the following:

        nameserver 127.0.0.1
     

    Comment out the entries for all other name servers. Start pdnsd by typing /etc/rc.d/init.d/pdnsd start. Test pdnsd by typing nslookup. On my system it displays:

        Default Server: Redhat
        Address: 127.0.0.1
        >
    

    Stop pdnsd by typing /etc/rc.d/init.d/pdnsd stop.
    Fire up your editor and add a line like this to the end of your /etc/hosts file.

     
        127.0.0.2    testhost
    

    Save the file and start pdnsd again. Type nslookup. Inside nslookup, type 'testhost'.

      > testhost
      Server: Redhat
      Address: 127.0.0.1
      Non-authoritative answer:
      Name: testhost
      Address: 127.0.0.2
     

    If this answer is obtained, it shows that your pdnsd is working (remember to remove the last line from /etc/hosts).

    4. Sample configuration file.

    My pdnsd.conf looks like this:
    global {
     perm_cache=2048;
     cache_dir="/var/cache/pdnsd";
     max_ttl=204800;
     run_as="nobody";
     paranoid=on;
     server_port=53;
     server_ip="127.0.0.1";
    }
    server {
     ip="202.54.6.5";
     timeout=260;
     interval=900;
     uptest=none;
     ping_timeout=500;
     purge_cache=off;
     caching=on;
    }
    server {
     ip="202.54.1.30";
     timeout=260;
     interval=900;
     uptest=none;
     ping_timeout=500;
     purge_cache=off;
     caching=on;
    }
    server {
     ip="202.9.128.6";
     timeout=260;
     interval=900;
     uptest=none;
     ping_timeout=500;
     purge_cache=off;
     caching=on;
    }
    source {
     ttl=86400;
     owner="localhost.";
     serve_aliases=on;
     file="/etc/hosts";
    }
    /*
    rr {
     ttl=86400;
     owner="localhost.";
     name="localhost.";
     a="127.0.0.1";
     soa="localhost.","root.localhost.",42,86400,900,86400,86400;
    }
    rr {
     ttl=86400;
     owner="localhost.";
     name="1.0.0.127.in-addr.arpa.";
     ptr="localhost.";
     soa="localhost.","root.localhost.",42,86400,900,86400,86400;
    } */

    This is a sample working configuration (the DNS servers are those of VSNL, an Indian ISP). You must edit the server sections of pdnsd.conf to suit your needs (fill in the DNS servers of your ISP against the ip entries). Start PDNSD once more and connect to the Internet. Type nslookup and do a query for, say, yahoo.com. The server will respond with something like:

    > yahoo.com
    Server: debian
    Address: 127.0.0.1
    Non-authoritative answer:
    Name: yahoo.com
    Addresses: 204.71.200.245
    Stop PDNSD and disconnect from the Internet. Start PDNSD again and query for yahoo.com through nslookup. If you are getting the same answer as above, fine: have a coffee and relax. If not, something is wrong somewhere.

    5. Tweaking configuration files.

    If you are using BIND as your primary nameserver, you can very well make PDNSD the secondary one. But here you have a Catch-22 situation: on which local IP and port should PDNSD listen? Look at ragOO's pdnsd.conf and named.conf files:

    [pdnsd.conf]

    global {
    perm_cache=2048;
    cache_dir="/var/cache/pdnsd";
    max_ttl=604800;
    run_as="nobody";
    paranoid=off;
    server_port=53;
    server_ip="127.0.0.2";
    }
    [named.conf--relevant section only]
    options {
    directory "/var/cache/bind";
    forward first;
    forwarders { 127.0.0.2; 202.54.6.1; 202.54.1.30; };
    };
    ragOO's GNU/Linux machine has local (lo) IP addresses from 127.0.0.1 to 127.0.0.8. This is the same on all GNU/Linux systems, so one has the option to specify 127.0.0.2 as an alternate local server address. PDNSD listens on port 53; note that 127.0.0.2 is the first forwarder in named.conf. This means that BIND first looks in PDNSD's cached records for a match for the address the client program has requested; if it is not there, it queries the DNS resolvers of your ISP, in that order.

    6. FAQs.

    The following question and answer are from correspondence I had with Thomas Moestl, the author of pdnsd.

    Q. I had some problem with your default installation. The cache was not growing. It was stuck at 4 bytes. I changed permissions to 'nobody' and it started growing. Probably a problem with my configuration. Will you please let me know the correct file permissions for /var/cache/pdnsd and /var/cache/pdnsd/pdnsd.cache ?

    A. The best thing is to give the user who runs pdnsd write permission to the cache directory (and of course to the cache file):

         chown <user> /var/cache/pdnsd
         chmod 0700 /var/cache/pdnsd
         chown <user> /var/cache/pdnsd/pdnsd.cache
         chmod 0600 /var/cache/pdnsd/pdnsd.cache

    Where the permissions can of course be more liberal, if you want. The ones given are the minimum required permissions. The default permissions "make install" sets on the files are also OK. The only important thing is to chown the file. Normally, "make install" should also chown the cache file (maybe a bug? If it didn't for you, please drop me a mail).

    7. Credits.

    Thanks to the author of this nifty utility, Thomas Moestl for clarifying certain points and doubts. He made me a better user of PDNSD :-) !

    Thanks to Manoj Victor Mathew and Raghavendra Bhat (ragOO) for mentioning 'pdnsd' during one of the ILUG-Cochin meets. ragOO edited and modified the draft heavily and encouraged me to keep improving it.

    Last but not least, thanks to all users of this elegant program who may have found this rant useful. Enjoy!


    Copyright © 2001, Sunil Thomas Thonikuzhiyil.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001

    "Linux Gazette...making Linux just a little more fun!"


    The Back Page


    About This Month's Authors


    Shane Collinge

    Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.

    Fernando Correa

    Fernando is a computer analyst about to complete his degree at the Federal University of Rio de Janeiro. He and his staff have built the best Linux portal in Brazil, and they have further plans to improve services and content for their Internet users.

    Rahul Joshi

    I am a final year computer engineering student at the Government College of Engineering, Pune, India. I have been using Linux for about 2 years. I have also contributed to the Linux Documentation Project by maintaining the Linux Swap Space mini-HOWTO. I was introduced to PVM and MPI during my final year project, and I have implemented some brute-force programs of our project on the PARAM 1000 Supercomputer at the Center for Development of Advanced Computing, University of Pune, using both the PVM and MPI libraries.

    Ned Lilly

    Ned Lilly is vice president of hacker relations for Great Bridge, a company formed to promote, market and provide professional support services for PostgreSQL, the open source database, and other open source business solutions. He can be reached at ned@greatbridge.com.

    Mark Nielsen

    Mark works at ZING (www.genericbooks.com) and GNUJobs.com. Previously, Mark founded The Computer Underground. Mark works on non-profit and volunteer projects which promote free literature and software. To make a living, he recruits people for GNU related jobs and also provides solutions for web/database problems using Linux, FreeBSD, Apache, Zope, Perl, Python, and PostgreSQL.

    Ben Okopnik

    A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building networks and hacking on hardware and software whenever he runs out of cruising money. He's been playing and working with computers since the Elder Days (anybody remember the Elf II?), and isn't about to stop any time soon.

    Pradeep Padala

    I am a software engineer at Hughes Software Systems. I love hacking and adore Linux. I graduated this year with a B.E. (equivalent to a B.S.) in Computer Science and Engineering. My interests include solving puzzles and playing board games. I can be reached through p_padala@yahoo.com or http://pradeeppadala.homestead.com.

    Dustin Puryear

    Dustin is a professional working in the Information Systems industry. He is the author of "Integrate Linux Solutions into Your Windows Network," as well as numerous articles for both print and online publications. He may be reached at dpuryear@usa.net.

    Jason Steffler

    Jason is a Software Architect for McHugh Software International.  His computer related interests include: OO programming & design, Smalltalking, the peopleware aspects of software, and noodl'n around with Linux.

    Jan Stumpel

    Jan lives in Oegstgeest, The Netherlands. He has been a Linux user since 1995. At the moment he is trying to get a Debian installation just right.

    Sunil Thomas Thonikuzhiyil

    I teach Computer Science at the College of Applied Sciences, Calicut, India. I have been hooked on Linux since 1996. I have a Masters in Computer Science from Cochin University. I am interested in all sorts of operating systems. In my free time I love to listen to Indian classical music.


    Not Linux


    World of Spam

    Some of the funnier spams found in the Gazette mailbox.

    Why not OWN your very own profit center gazette?

    Design it to advertise the program(s) you want to see placed up front! Tired of advertising for others? gazette, don't settle for a small part of the pie...

    gazette, YOU can have it ALL!! gazette, receive a monthly check for referrals generated from your very own profit center! Allow the system to automatically send out a personalized message on your behalf...


    I have a domain name for sale that will be exceptionally good for your business: www.midwifes.com.

    Your business will be seen as being more successful. There is prestige in owing a Dot Com Domain.

    Your business will be seen as being more creditable. A Dot Com domain gives your business a more established feel.

    [Doesn't he know that midwives is spelled with a "v"? Or is it intentionally misspelled because www.midwives.com is already taken?]

    [This one isn't spam but it's funny anyway.]

    Indonesia's First Penguin Meets Kangaroos

    Trabas, the first Indonesian Penguin, will meet the Kangaroos at The Linux Business Expo & Open Source Conference on 7-9 March 2001 at Sydney Convention & Exhibition Center, Australia.

    [Trabas makes an Internet Account Management & Billing System (IAMBS) for ISPs and other Internet-based service providers, as well as other software products.]

    You are receiving this E-mail because you signed up for Big Daddy up dates and you showed interest in motorcycles or motorcycle products. If you are looking for an E-Zine for bikers look no further!


    Do you have an interest in Affordable Personal Alcohol Detectors??? Please visit our Drunk Driving Prevention Center at...


    This is not considered SPAM.

    [By whom?]

    You and I have not met, but because you're a respected business professional with an interest in improving employee productivity, I would like to offer you a free preview of the Professional Selling SkillMap(tm).


    I found your address on a site about wine and spirits, cigar and good living. X is a virtual club for all those interested in wine in both a professional and personal capacity. You too can be among our 6992 members to receive our free weekly bulletin.

    [And you too can be among the 6048 who receive an lg-announce message every month.]

    We are responding to your request for FREE analysis of your site... We feel there is very substantial potential to promote your site on the Internet...

    Please REPLY to this email and include your full name, telephone number and URL.

    [What?! They are responding to us and they're asking for our URL? I thought they already had it....]

    FIRST, I MUST SOLICIT YOUR CONFIDENCE IN THIS TRANSACTION... YOU HAVE BEEN RECOMMENDED BY AN ASSOCIATE WHO ASSURED ME IN CONFIDENCE OF YOUR ABILITY AND RELIABILITY TO PROSECUTE A TRANSACTION OF GREAT MAGNITUDE INVOLVING A PENDING BUSINESS TRANSACTION REQUIRING MAXIMUM CONFIDENCE.

    WE ARE TOP OFFICIALS OF THE FEDERAL GOVERNMENT CONTRACT REVIEW PANEL WHO ARE INTERESTED IN IMPORTATION OF GOODS INTO OUR COUNTRY WITH FUNDS WHICH ARE PRESENTLY TRAPPED IN [West African country].

    THE SOURCE OF THIS FUND IS AS FOLLOWS : DURING THE REGIME OF OUR LATE HEAD OF STATE, [name], THE GOVERNMENT OFFICIALS SET UP COMPANIES AND AWARDED THEMSELVES CONTRACTS WHICH WERE GROSSLY OVER-INVOICED IN VARIOUS MINISTRIES... WE HAVE IDENTIFIED A LOT OF INFLATED CONTRACT FUNDS WHICH ARE PRESENTLY FLOATING IN THE CENTRAL BANK OF [that country].

    HOWEVER, DUE TO OUR POSITION AS CIVIL SERVANTS AND MEMBERS OF THIS PANEL, WE CANNOT ACQUIRE THIS MONEY IN OUR NAMES. I HAVE THEREFORE, BEEN DELEGATED AS A MATTER OF TRUST BY MY COLLEAGUES OF THE PANEL TO LOOK FOR AN OVERSEAS PARTNER INTO WHOSE ACCOUNT THE SUM OF US$25,000,000.00 (TWENTY FIVE MILLION UNITED STATES DOLLARS) WILL BE PAID BY TELEGRAPHIC TRANSFER. HENCE WE ARE WRITING YOU THIS LETTER.

    WE HAVE AGREED TO SHARE THE MONEY THUS:
    1. 70% FOR US (THE OFFICIALS)
    2. 20% FOR THE FOREIGN PARTNER (YOU)
    3. 20% TO BE USED IN SETTLING TAXATION AND ALL LOCAL AND FOREIGN EXPENSES.

    IT IS FROM THIS 70% THAT WE WISH TO COMMENCE THE IMPORTATION BUSINESS.

    PLEASE NOTE THAT THIS TRANSACTION IS 100% SAFE AND WE HOPE THAT THE FUNDS CAN ARRIVE YOUR ACCOUNT IN LATEST TEN (10) BANKING DAYS FROM THE DATE OF RECIEPT OF THE FOLLOWING INFORMATION . A SUITABLE NAME AND BANK ACCOUNT INTO WHICH THE FUNDS CAN BE PAID.

    THE ABOVE INFORMATION WILL ENABLE US WRITE LETTERS OF CLAIM AND JOB DESCRIPTION RESPECTIVELY. THIS WAY WE WILL USE YOUR COMPANY'S NAME TO APPLY FOR PAYMENTS AND RE-AWARD THE CONTRACT IN YOUR COMPANY NAME.

    [Now let me get this straight. You expect the total taxes and fees to add up to only 20%? But the US government will want 50% or more in taxes; where will the other 30% come from? And 70 + 20 + 20 adds up to 110%, not 100%. Where will that money come from? And I'm supposed to put my business' reputation on the line for you? What do you want to import anyway, drugs? And finally, whose money is this anyway? If it was government overpayment then the money belongs to the taxpayers of your country and should be returned to them. It's not there for you to skim the pork off your civil service job and get rich on the backs of your countrymen.]

    Dear Friend and Future Millionaire...


    Mystery Shoppers Needed! GET PAID to shop at your favorite stores...


    My name is [name] and I came upon your site and think there is a GREAT opportunity for us to partner. At [site] you can create your own branded/private label travel website with your own banners, logo's, custom design, and graphics for FREE.


    begin 644 Happy99.exe
    M35I0``(````$``\`__\``+@`````````0``:````````````````````````
    M``````````````````````$``+H0``X?M`G-(;@!3,TAD)!4:&ES('!R;V=R
    M86T@;75S="!B92!R=6X@=6YD97(@5VEN,S(-"B0W```````````````````` 
    
    [This one got Your Editor really riled up and he wrote, "F*** you, [name]. We don't need no stinkin' viruses!" Then Ben calmed me down by reminding me that it was probably sent without the querent's knowledge.

    In any case, I can't believe the idiocy of uuencoding a Windows virus. Most Windows machines don't have a uudecoder installed. Or does MS Outlook uudecode?]
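
    [For readers wondering what uuencoding actually is: it maps arbitrary binary data onto printable ASCII lines, each prefixed with a length character, wrapped between "begin <mode> <name>" and "end" markers, exactly like the "begin 644 Happy99.exe" block above. A minimal round-trip sketch in Python, using the standard binascii module (sample bytes only, no virus included):

    import binascii

    # Encode a small binary payload; uuencode handles at most 45 bytes per line.
    payload = b"MZ\x90\x00 not really an .exe, just sample bytes"
    encoded_lines = []
    for i in range(0, len(payload), 45):
        encoded_lines.append(binascii.b2a_uu(payload[i:i + 45]))

    # A uuencoded file wraps the data lines in a header, a terminating
    # backtick line (a zero-length line), and an "end" marker.
    wrapped = b"begin 644 sample.bin\n" + b"".join(encoded_lines) + b"`\nend\n"

    # Decoding skips the header and trailer and reverses each data line.
    decoded = b"".join(
        binascii.a2b_uu(line)
        for line in wrapped.splitlines()[1:-2]
    )
    assert decoded == payload

    Any Unix box with the sharutils `uudecode` command can unpack such a file, which is why uuencoded mail attachments predate MIME. -Ed.]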


    Dear webmaster,
    Your name was given to me by a colleague who thought you would be interested in this special opportunity since you are in the bulk email/internet marketing business.

    [I am?]

    [Unlike the other messages above, which are bona fide e-mails I received, this is one Rory Krause and I made up.]

    Looking for a desktop OS to go with your Linux servers? How about Microsoft Windows? Your office staff will love the familiar user interface. Your tech-support people will no longer have to answer the question, "What's that funny window key on the keyboard for?" And best of all, it's Samba-compatible!!


    Happy Linuxing!

    Michael Orr
    Editor, Linux Gazette, gazette@ssc.com


    Copyright © 2001, the Editors of Linux Gazette.
    Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 65 of Linux Gazette, April 2001