Newest Posts

Still secure with an easy-to-guess password

What are the chances someone will launch a dictionary attack against an account of yours? For most of us, it’s probably pretty low – at least, it’s likely that no-one will spend the time and resources to try something like this:

  1. login: davidwhite, password: aardvark
  2. login: davidwhite, password: abacus
  3. login: davidwhite, password: abalone
  4. login: davidwhite, password: zoo
  5. login: davidwhite, password: zoom
  6. login: davidwhite, password: zucchini

etc. That’s pretty expensive for an attacker, requiring potentially hundreds of thousands of attempts, all just to gain access to one account.

It’s much more likely that an attacker will instead take the list of most popular passwords and attack multiple accounts:

  1. login: aaron, password: iloveyou
  2. login: abigail, password: iloveyou
  3. login: adam, password: iloveyou
  4. login: zane, password: iloveyou
  5. login: zeke, password: iloveyou
  6. login: zoe, password: iloveyou

Even if only 0.1% of users use “iloveyou” as a password, an attacker who attempts to log in to 100,000 valid accounts stands to gain access to 100 of them this way.

What’s the lesson? Don’t use a common password, naturally. But Microsoft has come out with an interesting idea – allow users to pick whatever password they want, but limit the number of times a single password can be used. So perhaps only ten people are allowed to use “iloveyou” as a password – the next person who tries is told they must choose something else.

Disadvantage: Users now face the inconvenience of making sure not only that their chosen login is unique, but potentially that their password is as well.
Advantage: Simple non-strong passwords (i.e. not filled with crazy enforced combinations of mixed-case letters, numbers and symbols) can be allowed without compromising security. Combined with a policy like Twitter’s that bans the most common passwords altogether, a system can be safeguarded against the easy sort of dictionary attack against multiple accounts I showed above.
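Sketched in JavaScript (my own illustration of the idea – nothing Microsoft has published), the scheme boils down to a counter per password:

```javascript
// Hypothetical sketch of a "popularity cap" on passwords. In a real system
// the counts would live in a persistent store and the passwords would be
// hashed; a plain Map is used here purely for illustration.
const MAX_USERS_PER_PASSWORD = 10;
const passwordCounts = new Map();

function tryChoosePassword(password) {
  const count = passwordCounts.get(password) || 0;
  if (count >= MAX_USERS_PER_PASSWORD) {
    return false; // too popular -- the user must choose something else
  }
  passwordCounts.set(password, count + 1);
  return true;
}
```

With the cap at ten, the eleventh person to pick “iloveyou” is turned away – which is exactly what defeats the attack above, since no single password ends up shared by enough accounts to be worth spraying.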

An interesting idea… although Microsoft is noncommittal about whether even they will be implementing it. It will be interesting to see if any sites put the idea into practice, and whether the advantages outweigh the disadvantage for users.

Programmers! I say to you now, knock off all that laziness!

(Didja get the reference in the title?)

One of the cardinal rules in making good user interfaces is don’t make the user do something that your program can do automatically for them. One of the most obvious failures to follow this principle is the Credit Card field in shopping carts. For instance, Crutchfield is an excellent store with the best phone customer service of any company I know of. And yet when you try to buy something from their website, you’re given this requirement:

Come on, people! There is no reason to require people to type in credit cards in the way you want them to. If they want to use spaces, dashes, or both, then let them! Take the time to write one extra line of code that will strip those spaces and dashes out in your program – don’t force people to conform to your standards for your convenience. Good interface design is about making the customer happy, not the programmer.

Here, I’ll even do it for you, in Perl:

$creditcard =~ s/[ \-]//g;

and in PHP:

$creditcard = preg_replace("/[ \-]/","",$creditcard);
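And since this sort of cleanup increasingly happens in the browser too, here’s the same thing as a JavaScript helper (my addition, not part of the original pair of examples):

```javascript
// Strip spaces and dashes from whatever format the customer typed,
// so "1234 5678-9012 3456" and "1234567890123456" are treated alike.
function normalizeCard(creditcard) {
  return creditcard.replace(/[ -]/g, '');
}
```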

Accept input liberally – that’s one mark of a well-designed user interface.

More on safely ignoring security advice

It’s common for computer professionals to laugh at (or perhaps curse) the stupidity of computer users. (“How can they be such idiots? Why would anyone ever click that attachment? Or visit that website? Or believe that spam?”)

But I think that, most of the time, that’s unfair. Most users just want their computers to work. Do you have to understand how an engine works in order to drive a car? Do you have to know anything about digital cable or satellite transmission protocols in order to change the channel on your TV? So why do we try to force PC users to become PC and network security experts when all they want to do is send e-mails and make PowerPoint presentations?

Picking up where I left off, I wanted to look a bit more at the whitepaper “So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users”. Not only are users not becoming better security experts – it turns out that they’re pretty much doing fine ignoring our advice, thank you very much. The occasional mishap that occurs because a user ignores safe-browsing tips is far outweighed by the benefits of not having to worry about all the things we tell them to worry about.

Let’s look at a few examples.

Passwords

What users are told to do: Change your password often. Use a mixture of letters and numbers and symbols. Make it 8 characters or longer. Don’t use the same password on more than one site. Don’t re-use an old password. Don’t write it down. Don’t use words in the dictionary.

What users actually do: Use our dog’s name as a password. Add a “1” to the end of it the next time we’re forced to change the password.

The tragic result of our ignoring security advice: Nothing.

Look, it’s no good telling me “But my brother-in-law’s neighbor’s mechanic had his password cracked because he just used ‘abc123’ for a password!” Yes, it will happen now and then. But it’s never happened to me. And probably not to you either. In fact in all my desktop support work, I’ve never had a client whose account was hacked because someone guessed their password.

And because it doesn’t happen very often, users can enjoy the benefits of a bad password (namely, ease of remembering) with the assurance that probably nothing bad will happen to them. Because it never has.

Phishing Awareness

What users are told to do: Pay attention to the URL of a site you’re visiting. Watch out for URLs like “http://10.42.12.94”. Or “www.paypal.com.evilsite.com”. Or “wwwpaypal.com”. Or “www.paypa|.com”. Or “www.evilsite.com/www.paypal.com/index.php”.

What users actually do: Click on links in e-mails and merrily go wherever they take them.

The consequences: Usually they go to the right site, because the links in most e-mails are harmless and go exactly where users think they’re going. If a user ends up on a phishing site that steals their bank account info, they complain to the bank, which refunds their money thanks to its policy of not holding customers accountable for fraudulent transactions.

The dangers are a little more pronounced here – I have met a number of people who have fallen for phishing scams. But none of them have had any permanent losses. Of those who had their bank accounts hacked, the bank refunded them their money. Result: the need to become an expert on URL formatting is greatly reduced, because the consequences (though irritating and inconvenient) are seldom catastrophic.

SSL Security Awareness

What users are told to do: Don’t do any shopping or banking on insecure sites. Look at the address bar and see if there is an “s” after the “http” part but before the “://” bit. Make sure there is a padlock on the page (though not in the “favicon” area, and not in the body of the web page, because those don’t count. The padlock can be found in a very specific place depending on your browser’s version and vendor.) Do not trust sites with self-signed or mismatched certificates. Failure to do this can result in your visiting phishing sites that will steal your account information.

What users actually do: Never, ever look at the address bar to see if they’re on an SSL site or not. Ignore any warnings about mismatched SSL certificates.

The horrible result: Nothing. Nothing bad happens. Ever.

Have you ever found a hacker site that tried to present itself as “www.paypal.com”, only to have its plan foiled because it had a mismatched SSL certificate that your browser warned you about?

Me neither. Nor have I ever gone to a site with a mismatched SSL certificate that was dangerous in the slightest. (“WARNING!! This website’s certificate expired 2 days ago! You must get out of here, right now!!” Yeah, whatever.)

Nor can I think of ever reading about credit card numbers (or any other sensitive information) being stolen because a man-in-the-middle managed to steal someone’s private data as it was being transmitted unencrypted over the Internet from the user’s browser to a legitimate website. And chances are, even if it has happened, it’s never happened to anyone you know. So most people decide… why bother worrying about it?

Conclusion

The point of this isn’t that security advice is wrong. Or even ill-advised. But those of us who do any sort of computer support need to realize that most users are getting along just fine while ignoring all the good things we tell them to do. Now, they may really want to do the right thing, and make themselves safe online, but the more we load them down with tips, advice, and education about how their browser (or the Internet) works, the more likely they’ll simply smile and nod at us, then go back to doing what they did before.

My friend Martin did an excellent job of showing the alternative – give users easy things they can do (preferably one-time things they can do, then forget about) that protect them from 90% of the attacks coming their way. For instance:

  • Worried about hacking attempts on your computer’s open ports? Buy a router at Best Buy and plug your computer into it. You’re done! (The NAT-routing neatly hides your PC in a private LAN behind the router)
  • Want to make a secure password? Try an acronym. e.g. “security is not too hard 4 me” = sinth4m.
  • Want to avoid phishing sites? Don’t click on links in e-mail messages ever. That should do it.

No URL scrutinizing. No Zone Alarm to download. No anti-phishing toolbars to keep an eye on. These are the sort of things people can easily remember, and hopefully do without changing the way they work on the computer.
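For fun, the acronym trick above can even be mechanized. A toy JavaScript helper (mine, not from Martin’s list):

```javascript
// Turn a memorable phrase into an acronym password by keeping the
// first character of each word (digits survive as themselves).
function acronymPassword(phrase) {
  return phrase.trim().split(/\s+/).map(word => word[0]).join('');
}
```

So “security is not too hard 4 me” becomes sinth4m, just as in the bullet above.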

Users aren’t idiots. And they’re not even really lazy or unconcerned about security. They just 1) are usually given too many “tips” to remember, and 2) don’t often experience any negative consequences when they ignore those tips. So the easier the advice is to follow, and the more relevant it is to actual dangers that users face, the more likely they are to follow it.

Poisson Rouge Hints

Well, this blog is nothing if not eclectic.

First, if you have children age 3-6 or so, you really ought to visit Poisson Rouge. It’s a huge site full of games, and all of them are fairly intuitive. (In fact, there are no instructions anywhere for the games.  You figure them out as you go along. It’s kind of like Baby’s First Myst).

Look at all the stuff to click!

Anyway, so… there’s a section of Poisson Rouge called Rycroft Park. (It’s out the window, on the main screen). The park is full of 46 more games, but there are also eight “fruits” hidden somewhere within the park. (I call ‘em “fruits”, because each one will be found on a tree that has that fruit’s shape). You don’t have to find them all, of course, to enjoy the games… which is good, because some of them are tough. Some are so well-hidden that our 3-year-old couldn’t find the last one, and her 5-year-old sister couldn’t help her… and after 20 minutes, neither could Mommy!  It was time to call in Daddy.

Don’t cry, honey — Daddy will find that missing heart fruit.

I’d like to say that my puzzle solving skills, finely-tuned after 15 years of adventure gaming, easily solved this puzzle designed for preschoolers.  But I can’t.  I’d like to say I was man enough not to cheat.   But no… after about 20 minutes, with bedtimes for the children overdue and sad faces looking up at me, I sighed, gave up, and turned to Google, that fount of all game cheat information.

…except I couldn’t find the answer there either! Somehow, in all the thousands of pages that have been written to guide players through every single game ever written, this sweet little infuriating scavenger hunt in a game for children appeared to have been missed.

So when I finally did find the last fruit (another 20 minutes after I’d put some sad children to bed), I decided to remedy that. So here, for any other parents (or children!) looking to complete their fruit collection, is the location of each fruit in Rycroft Park. (Some are still hard to find even with the hints… I’m not giving everything away!)

  1. Diamond fruit: in the red balloon game (eastern edge of the park)
  2. Fish fruit: in the Puppet Show game (south-west)
  3. Square fruit: in the Racing game (north)
  4. Heart fruit: in the Ice Cream game (south-east)
  5. Round fruit: in the Chicken game (south)
  6. Triangle fruit: in the Bicycle game (north)
  7. Star fruit: in the Yellow Rowboat (lake)
  8. Teapot fruit: in the Fish game (lake)

Safely Ignoring Security Advice

The NY Times has a mostly helpful article about the security of different methods of payment – credit cards vs. debit cards vs. PayPal. It also has a variety of common tips on how to be safe online. And to be sure, their tips are mostly accurate.

But is anyone going to actually do what they suggest?

For instance:

There are a few precautions everyone should take. First, look for signs of quality security at sites you use, like logos, or seals, from security providers like VeriSign or McAfee… To check that a seal is legitimate, click on it to make sure it takes you to the verification page of the security service.

“Signs of quality security”? Does anyone ever do this? Can you even find one on, say, Amazon.com? Dell.com? Walmart.com?  And if so, have you ever paid attention to it?  And if you haven’t, are you going to now?  Are you really going to click on it and make sure it takes you to verisign.com and not to ver1sign.com?  Are you going to tell your parents to do that too when they’re shopping online?

While you’re doing that, don’t forget to clarify this part to them:

SSL encryption, which is indicated by the “s” in “https” in the address bar and a padlock icon in the lower right-hand corner of the browser, is your best insurance against theft of your data while it’s being transmitted.

… except the padlock isn’t in the bottom-right corner of Internet Explorer 8. It’s next to the address bar. Oh, and make sure it’s to the right of the address bar, because if it’s to the left of the address bar, it could be a favicon. What’s a favicon? Well, you see, a website can put an image of a padlock on the left of the address bar, but they can’t do it on the right. Oh, and don’t trust a padlock icon if you see it in the body of the web page. What’s the body of the web page? Well, you see, there’s a difference between the top part of the web browser and the bottom part. The page you’re visiting can’t control the top part of the web browser (except the part to the left of the URL), but it can control the lower part.

Just remember all that, mom and dad. And don’t forget the “s” after http! Then you’ll be safe thanks to encryption!

And since shady sites can use encryption, too…

Oh, right! Sorry, I guess you’re not safe yet.

…also check the address bar for a bit of green or the site owner’s name written in green. (Recent versions of major browsers all now use green in some way to indicate the existence of another layer of security called an extended validation SSL certificate). It indicates that the site you’re visiting has been vetted and belongs to a legitimate company; it is not a phishing site. You will certainly see green on larger e-commerce sites and on bank sites.

What’s that? You say that our local credit union Truliant and MidCarolina community bank don’t have any green up in the address bar? Well, then they must be shady sites! Better not do business with them anymore… just to be safe. (I’m sure it has nothing to do with the fact that the “green” SSL sites cost 4X as much to buy and add absolutely no additional encryption security.)

I’ll spare you the rest of the tips (Buy security software! Update every piece of software you own! Get password-management software!) I fully acknowledge that none of this is bad advice. But is anyone going to actually do it? I certainly never click on Verisign links to validate a site, and while I’m vaguely aware of SSL now and then, I buy stuff unquestioningly from Amazon, NewEgg, etc. without giving even a fleeting glance at the address bar to ensure I haven’t somehow mysteriously been whisked off to an evil identity-stealing website.

But what if you do all this? What if you pound these tips into your parents so that they reluctantly become security experts just so they can buy a set of towels from Target.com? At least you’ve saved them from financial ruin, right? At least they’ll have the last laugh when their unwitting, non-security-conscious friends get their card stolen, right??

Here’s the good news about online payments: There is little to worry about using credit cards online, because the risk of loss from unauthorized charges, by law, is almost nil.

And with that, the urgency for security is almost completely swept away. What will you lose if someone steals your card and buys something online? Nothing at all. Oh, you’ll have to dispute the charges within 60 days, and you’ll need to get new cards, but you will lose exactly $0.

So… become a security expert and you will avoid any financial losses. Or, continue to ignore security practices… and you’ll also avoid any financial loss, other than a bit of inconvenience.

That’s why it’s so hard to get users to pay attention to security – because it almost never hurts them to ignore security advice.

That’s the conclusion of the excellent whitepaper “So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users”. I’ll explore that article a bit more in a future posting.

Testing anti-spam methods

These days I wouldn’t dream of using a mailto: tag on my websites. That’s been a bad idea for probably 13 years or so, when spammers realized they could pretty easily get a collection of valid e-mail addresses just by scouring websites and pulling out anything that looked like somebody@example.com.

But we still need ways to contact each other. And so an arms race has been going on for over a decade, with developers creating ingenious ways to allow real people to contact them while foiling spammers, and spammers coming up with better tools to get by those safeguards.

So which anti-spam methods really work? Well, let’s try them out!

Listed below are several real e-mail addresses, each one protected in a different way. My hope is that tons of spam harvesters will visit this page. On the backend, I have a program monitoring each mailbox. When a new message arrives, the program will dutifully note its arrival, increment a counter, then delete the message. At the bottom of this post, a graph shows how many messages have come into each mailbox since this posting went live.

Method 1 – unprotected

What the user sees: nobodyhere1@davidwhite.org
What the code looks like:

<a href="mailto:nobodyhere1@davidwhite.org">
  nobodyhere1@davidwhite.org
</a>

Our “control” e-mail address is this one – a plain, ordinary, circa-1997 mailto: tag with no protection whatsoever.

  • Pros: Easy!
  • Cons: Easily harvested. Will get spam in a matter of days.

Method 2 – HTML Entities

What the user sees: nobodyhere2@davidwhite.org
What the code looks like:

<a href="mailto:nobodyhe&#114;&#101;&#50;&#64;davidwhite&#46;org">
  nobodyhe&#114;&#101;&#50;&#64;davidwhite&#46;org
</a>

Here we’ve replaced some of the characters in the e-mail address with their HTML Entity equivalents (for instance, “r” becomes &#114;, “e” becomes &#101;, etc.).

  • Pros: Excellent compatibility within all browsers. A human-readable and clickable “mailto” link is visible, but appears obfuscated within the HTML source code.
  • Cons: This is a very simple, unsophisticated encoding method. Any automated spam tools could be easily programmed to convert HTML entities back into standard characters.
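To see just how unsophisticated the encoding is, here is a JavaScript sketch (mine – not a spammer’s actual tool) of both directions: generating the entities, and the one-line decoder a harvester would need. (This version encodes every character, where the example above encodes only a few; either way the round trip is trivial.)

```javascript
// Encode every character of an address as a decimal HTML entity...
function toEntities(text) {
  return [...text].map(ch => `&#${ch.codePointAt(0)};`).join('');
}

// ...and the single regex replace a spam harvester needs to undo it.
function fromEntities(html) {
  return html.replace(/&#(\d+);/g, (_, code) => String.fromCodePoint(Number(code)));
}
```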

Method 3 – Image

What the user sees: [This GIF is an e-mail address]
What the code looks like:

<img src="http://www.example.com/emailaddress.gif" />

The e-mail address above is a GIF, not plain text. (Here it is.)

  • Pros: Since it’s an image, it’s not harvestable by spammers using tools that look for e-mail addresses within the text of a webpage. Using OCR software to convert the image into text is probably not worth the spammer’s effort, and therefore won’t be done.
  • Cons: Image is not “clickable”, because a mailto: link would expose the e-mail address in text format. E-mail address cannot be copied-and-pasted. Anyone visiting this page with images not displayed will not see the address at all.

Method 4 – JavaScript

What the user sees:
What the code looks like:

<script type="text/javascript">
<!--
var address1='nobody';
var address2='here4';
var address3='davidwhite.org';
document.write('<a href="ma'+'ilto:');
document.write(address1+address2+'@'+address3);
document.write('">');
document.write(address1+address2+'@'+address3);
document.write('</a>');
// -->
</script>
<noscript>
  <img src="http://www.example.com/emailaddress.gif" />
</noscript>

Here the e-mail address and its link are generated by JavaScript. This can be programmed in any number of ways – here, I simply break up an e-mail address into three parts, then concatenate them back together (and do the same thing with the “mailto:” tag, just for good measure).

  • Pros: Generates a fully-working, clickable “mailto:” tag. E-mail address can be copied-and-pasted. The actual text of the e-mail address does not appear anywhere in the page’s source code. Assumes that most e-mail harvesting software does not use JavaScript to actually render a page, but simply parses the source code looking for e-mail addresses.
  • Cons: Won’t work if the user has JavaScript disabled in their browser. (In that case, we can still use the <NOSCRIPT> tag to give those users something useful – perhaps an image of an e-mail address as described in method #3)

Method 5 – Reversing text direction via CSS

What the user sees: gro.etihwdivad@5erehydobon
What the code looks like:

<span style="unicode-bidi: bidi-override; direction: rtl;">
  gro.etihwdivad@5erehydobon
</span>

This is a neat one, for browsers that support the CSS2 “unicode-bidi” and “direction” properties. The CSS code takes the letters within the “span” tag and reverses them. Voilà – readable text for a human, backwards gibberish for spam harvesters that aren’t using CSS to render the webpage.

  • Pros: Works in any browser that supports CSS2 (I tested as far back as Internet Explorer 6 and Firefox 1, and I’m guessing even Internet Explorer 5 might support it). No JavaScript required. Address can be copied-and-pasted, but does not legibly appear in the page’s source code.
  • Cons: Can’t include a clickable “mailto” tag. E-mail address appears reversed in browsers that don’t support CSS2.
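Generating the reversed markup is a one-liner too. A sketch (my own helper, not part of the original method):

```javascript
// Store the address backwards in the HTML; the CSS bidi-override
// flips it back into reading order for human visitors.
function bidiObfuscate(email) {
  const reversed = [...email].reverse().join('');
  return `<span style="unicode-bidi: bidi-override; direction: rtl;">${reversed}</span>`;
}
```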

Method 6 – Forms

What the user sees:

Enter your comments below to e-mail them to me.

What the code looks like:

<form action="mailer.php" method="post">
Enter your comments below to e-mail them to me.
<textarea name="comments" rows="3" cols="50"></textarea>
<input type="submit" value="E-mail these comments" />
</form>

Here is the safest solution of all – and the least user-friendly. The e-mail address is completely hidden, safely tucked within the mailer.php script that will handle the form submission.

  • Pros: Your e-mail address is never publicly exposed. Your e-mail address can also be updated in the mailer program without having to make any website HTML changes.
  • Cons: Requires programming skill to write a mailer application. Users cannot use their own mail program to send you a message (or send an attachment, or keep a copy of their message in their “Sent” folder, or retrieve it later, etc.) Spammers now submit junk to forms in addition to e-mail addresses, in hopes of reaching you that way. Therefore, even though your e-mail address remains hidden, you may still get a ton of spam through the form.

Method 7 – Forms with CAPTCHAs

What the user sees:

Enter your comments below to e-mail them to me.

Type in the text you see here: [This GIF has a secret word!]

What the code looks like:

<form action="mailer.php" method="post">
Enter your comments below to e-mail them to me.
<textarea name="comments" rows="3" cols="50"></textarea>
Type in the text you see here:
<img src="http://www.example.com/captcha.gif" />
<input type="text" name="captcha" size="10" />
<input type="submit" value="E-mail these comments" />
</form>

As I mentioned, spammers have started filling out all kinds of forms on web pages, wherever they can, in hopes of getting their spam sent to you or (better yet) automatically posted on a blog (maybe in the “Comments” section). This solution adds a CAPTCHA element to the form – a visual element that should be readable by a human but hard to read by a spambot. If the mailer program detects that the secret word was not entered correctly, the comments in the form are rejected.

  • Pros: Your e-mail address is never publicly exposed. Form spam is greatly reduced, making it more likely that only legitimate mail can get through.
  • Cons: CAPTCHAs can be difficult even for humans to solve, leading to frustration and annoyance. Fairly complex to set up.

Method 8 – Forms with JavaScript

What the user sees:

Enter your comments below to e-mail them to me.

What the code looks like:

<form action="mailer.php" method="post"
  onsubmit="this.magicfield.value='GOODFORM';">
<input type="hidden" name="magicfield" value="">
Enter your comments below to e-mail them to me.
<textarea cols="40" rows="3" name="comments"></textarea>
<input type="submit" value="E-mail these comments" />
</form>

Another variation to keep spammers from filling out your forms with junk is to use JavaScript to slightly tweak the form before it gets submitted. In this example, a hidden field called “magicfield” is intentionally left blank. When the form is submitted, a small piece of JavaScript code will give “magicfield” a value of “GOODFORM”. When the mailer program processes the form, it will reject any submissions that do not have a “magicfield” value of GOODFORM. (The idea here is that spammers submit form spam with automated tools that ignore the JavaScript code, and therefore fail to tweak the form in the necessary way.)

  • Pros: Your e-mail address is never publicly exposed, and form spam is greatly reduced. No CAPTCHAs or other annoyances are presented to legitimate users. (In fact, the entire JavaScript tweaking process is invisible to them.)
  • Cons: Won’t work if the user has JavaScript disabled in their browser.
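The server-side half of the trick isn’t shown above. Sketched in JavaScript rather than the mailer.php of the example (the field name and magic value are the only things taken from the post), it is just:

```javascript
// Reject any submission whose hidden "magicfield" wasn't filled in by the
// onsubmit JavaScript -- automated form-spam tools typically never run it.
function acceptSubmission(formFields) {
  return formFields.magicfield === 'GOODFORM';
}
```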

Conclusion

With each method of hiding your e-mail address, you have to decide:

  • How effective is this going to be? i.e. how likely is it that spammers will develop a tool to de-obfuscate an e-mail address hidden in this manner?
  • How compatible is this technique? How many people are using web browsers that don’t support the method (JavaScript, CSS, etc.) I’m using?
  • How annoying is this technique? Am I making it overly difficult for legitimate users to contact me?

Personally, I like method #4. I use a variation of Syronex’s address-obfuscating JavaScript, so I can give my visitors real “mailto:” links and still support non-JavaScript users with the <NOSCRIPT> tag.

Yeah, but do any of these methods actually work?

Well, here’s where I hope that spammers can help us find out. In time, this page should be crawled and harvested by spambots, and they’ll use whatever methods they have to de-obfuscate the e-mail addresses on this page. By monitoring how many e-mails are received by each address, we can get an idea of which methods are safely hiding e-mail addresses, and which aren’t.

Here’s the latest graph showing how many spam e-mails have been received. The less spam a method gets, the (hopefully) more effective it is.

[Current Spam Count]

The route to nowhere

One of the Linux servers I administer is a multihomed box – one DSL line, 3 static IP addresses.  Let’s call ‘em:

  • 172.16.8.50
  • 172.16.8.51
  • 172.16.8.52

Everything was fine on it back when we had SDSL, which I loved for its fast upload speed but our ISP hated. Apparently SDSL is dead, and when their central-office equipment breaks they can’t get any replacement parts.  So eventually we switched to ADSL, and that was when the Route to Nowhere problem started.

Occasionally, for no apparent reason, our ADSL modem will suddenly forget that we have 3 IP addresses.

www.davidwhite.org$ ping 172.16.8.50
PING 172.16.8.50 (172.16.8.50) 56(84) bytes of data.
64 bytes from 172.16.8.50: icmp_seq=1 ttl=60 time=4.74 ms
64 bytes from 172.16.8.50: icmp_seq=2 ttl=60 time=13.7 ms
64 bytes from 172.16.8.50: icmp_seq=3 ttl=60 time=5.02 ms

www.davidwhite.org$ ping 172.16.8.51
--- 172.16.8.51 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9003ms

www.davidwhite.org$ ping 172.16.8.52
--- 172.16.8.52 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 8998ms

My early fix for this problem was annoyingly kludgey:

  1. Reconfigure the server as only having a single IP address (e.g. 172.16.8.50)
  2. Make an outbound connection to somewhere (e.g. www.google.com)
  3. Now make an inbound connection to a website associated with 172.16.8.50.  It will work.
  4. Repeat steps 1-3 with the other two IP addresses
  5. Reconfigure the server back to its multihomed state with 3 IP addresses.  Everything will work as it should.

I now have a better solution – generate gratuitous ARPs.  I don’t know why this should be necessary, but at least I have something I can automate.  I grabbed and compiled arpsend and can use this script to automatically send a gratuitous ARP request (from one of the server’s IP addresses back to itself) followed by an ARP reply directed to the ADSL modem in hopes it will notice and fix the Routes to Nowhere.  So far, so good.

(In this script, 172.16.8.49 is the ADSL modem and 12:34:56:78:9A:BC is the MAC address of the Linux server’s ethernet card)

# Gratuitous ARP request
$ arpsend -T ff:ff:ff:ff:ff:ff -t 172.16.8.50

# ARP reply to the ADSL modem
$ arpsend -o 2 -E 12:34:56:78:9A:BC -S 12:34:56:78:9A:BC \
    -s 172.16.8.50 -t 172.16.8.49

# Gratuitous ARP request
$ arpsend -T ff:ff:ff:ff:ff:ff -t 172.16.8.51

# ARP reply to the ADSL modem
$ arpsend -o 2 -E 12:34:56:78:9A:BC -S 12:34:56:78:9A:BC \
    -s 172.16.8.51 -t 172.16.8.49

# Gratuitous ARP request
$ arpsend -T ff:ff:ff:ff:ff:ff -t 172.16.8.52

# ARP reply to the ADSL modem
$ arpsend -o 2 -E 12:34:56:78:9A:BC -S 12:34:56:78:9A:BC \
    -s 172.16.8.52 -t 172.16.8.49

A winter coat

PC owners are often surprised to learn that their PCs will grow a thick, luxurious coat in the winter months.  This PC I was fixing was a bit overdue for shearing.

All warm and furry

Broken CPU fan clips? No problem!

So, I need to back up a little bit from where I was.  My PC had been slowly getting more and more cantankerous.  We had settled into the following routine:

  1. Turn the PC on.  Note how it doesn’t beep, doesn’t POST, and doesn’t show anything on the screen.
  2. Leave it like that (on and running but not booting) for about half an hour.
  3. Come back and reboot it.  Hope that it’s shaken off its grogginess by now and is ready to actually boot.

Finally the day came when it wouldn’t boot at all – not after half an hour, not after half a day of being on.  I hate problems like that – nothing on the screen to help you, no desperate beep codes from the motherboard trying to send a coded message – nothing.  When a PC won’t boot at all, it’s usually either

  • the CPU,
  • the motherboard, or
  • the power supply (PSU)

Fortunately I was able to find a working Intel Core 2 Duo PC with parts I could “borrow”.  Eventually, swapping the PSU, CPU and RAM back and forth between PCs, I found the problem – a bad stick of RAM.  Putting my PC back together with the one remaining stick, it worked.  Success!

Except for one problem.  Intel Core 2 Duos ship with a heatsink/fan combination that must be securely fastened onto the CPU.  It works well, but isn’t really designed for repeated installation and removal.  When I removed the heatsink to get at the CPU during the testing, the tiny plastic clips had gotten bent.  And no matter how carefully I tried to put them back into the holes on the motherboard, they just bent further.  Until finally… *snap!*

My Intel fan/heatsink

Plastic clips on 3 of the 4 feet are fine

But this one is broken

Unfortunately, three working clips wasn’t going to cut it.  Unless all four clips are pressing the heatsink down hard onto the CPU, the CPU’s temperature will rise by 30°C or more, putting it way into the danger zone.

With no easy way to replace the broken clip, I needed a new heatsink and fan. Was my computer going to be out of commission for 5 days or so until NewEgg could ship me a replacement?  All because of a little broken plastic clip?

Or was there some other way to jam the heatsink onto the CPU?

I took a look at my case.  Running along the length of the case, a little above the CPU, was a horizontal stabilizer bar.  And the CPU fan itself had a small area on the plastic casing above the fan where something could possibly be attached. Hmmmm.

I scrounged around and found a 7″ block of wood. Using that and a piece of cardboard for a shim, I found I was able to jam the block of wood behind the stabilizer bar and “hook” it carefully onto the CPU fan’s plastic casing. The wood cleared the spinning fan blades by less than ¼″. The wood block, once wedged into place and duct-taped (!) to the case, could not be touched again. Any change in its angle would reduce the pressure on the fan/heatsink and cause the CPU to begin heating up. And of course if it fell off altogether, the CPU would fry itself.

Wooden block, fan and CPU

It was the craziest kludge I’ve ever done.

And it worked!

The PC was carefully put back in place (without the cover – it wouldn’t fit with the wooden block jutting out) and the children were shooed away from it.  My biggest fear now was that the block would shift or fall off when we weren’t paying attention.  So I installed Core Temp and set it to shut the PC down automatically if the CPU temperature got too hot.
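Core Temp handled that watchdog job for me on Windows. For the curious, the same safety net on a Linux box can be sketched in a few lines of shell – the sensor path and the 85°C cutoff here are my own assumptions, not anything taken from Core Temp:

```shell
#!/bin/sh
# Rough sketch of a CPU-temperature watchdog: read the sensor, and if the
# chip is over the limit, power the machine off before it fries itself.
# Run it from cron every minute or so. Sensor path and limit are assumptions.

LIMIT_C=85
SENSOR=/sys/class/thermal/thermal_zone0/temp   # Linux reports millidegrees C

too_hot() {
    # $1 = sensor reading in millidegrees Celsius
    [ $(( $1 / 1000 )) -ge "$LIMIT_C" ]
}

if [ -r "$SENSOR" ] && too_hot "$(cat "$SENSOR")"; then
    shutdown -h now    # needs root
fi
```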

Within a week, my new Masscool fan/heatsink had arrived (this one uses screws and a metal plate – no more flimsy clips!) and the wooden block was retired with a sigh of relief.

Solving computer problems and living to tell about it.

One tiny little *snap*.  And with that, I knew I was in trouble.  The little plastic clip had finally broken off, and the heatsink could no longer be attached.   Now what?

Oh – sorry.  I’m getting ahead of myself.  Hello! Welcome to “Near-Fatal Knowledge”, my diary of computer problems, solutions, and how getting those solutions has nearly killed me repeatedly.  I’m David White – programmer, web designer, sysadmin, PC repairman and general “computer guy”.  Over the years, I’ve learned a lot of neat things about technology.  A lot of that knowledge is how to fix things.  And a lot of that has come because I’ve broken something in a new and inventive way, and now I have to fix it.

Learning how to fix computer-related problems is an ordeal by fire.  Sometimes it’s easy.  And sometimes I have to scour message boards, fill out bug reports, and bang my head on a wall before I finally discover the one magic thing that will fix the problem or the one magic line of code that will make the program work.

If I’m smart, I’ll write down whatever it was that nearly cost me my life (or sanity) to learn.  Because if I don’t, the chances are good that one day I’ll encounter the problem again, and say to myself “Wait, that looks familiar!  What did I do to fix that last time?  I know there’s some trick to this – what was it??  Gaaaah!”

“Near-Fatal Knowledge” is my attempt to document some of those problems, hopefully in a way that’ll entertain you.  And maybe – just maybe – I’ll save someone’s life.

Oh, right  – the broken heatsink clip.  I’ll get back to that in the next post.