Category Archives: Web Development

Craigslist blocking Feedly again

It looks like craigslist is once again blocking Feedly from accessing its RSS feeds. This happened a few months ago, when Feedly traffic apparently got high enough to trip an automated block on craigslist’s end. Feedly made some changes to reduce its traffic, and things started working again… until now. As far as I know there’s no official word from craigslist, but it seems likely that Feedly has simply grown to the point where it’s hitting the block again.

Unfortunately this means that at the moment, RSS feeds from craigslist (like the ones from our RSS Feeds Tool) are not updating on Feedly. Most likely they will get this sorted out in the next few weeks, but if you don’t want to wait, there are a couple of options. First off, other popular readers, such as NewsBlur and TheOldReader, appear to be working for now, although as people move over from Feedly, it likely won’t be long before they’re hitting the same problems.

A longer-term solution, albeit one requiring a bit more effort, is to host your own RSS server. If you like NewsBlur, you can actually self-host it on your own server (or home computer), which will not only keep you clear of these kinds of blocks, but also save you the annual subscription fee. Code and installation instructions can be found on GitHub here.

Another popular option, with somewhat more detailed installation instructions, is TinyTinyRSS. There’s a primer on MakeUseOf here. Whichever reader you choose, self-hosted or not, you should be able to import the OPML files generated by SearchTempest’s RSS Feeds Tool. Let us know how it goes in the comments!

Edit: It sounds like a number of people have been having trouble lately with self-hosted RSS as well. This thread at TheOldReader might shed some light. Apparently craigslist recently made a change to how they redirect RSS URLs, which TheOldReader says isn’t supported by many other readers. So if your RSS reader isn’t picking up craigslist feeds, you might want to ask them to look into that.

Maintenance

Quick tip for other businesses out there. Regardless of what business you’re in, or what kind of website you have, if your site goes down for “maintenance” every night, you’re doing it wrong.

Oh, and if your website is “closed” during non-business hours, you’re really doing it wrong, although it appears the standard culprits there are government agencies.

(Sheesh, don’t people know 2am is a prime working hour for us entrepreneurs? ;))

Standardized password-change path

Heartbleed is making me realize how much the internet needs a standardized way to change your password on websites. Right now, if you want to change your password on a given site, the process looks something like this:

Go to homepage > Search for login option > search for account/settings/profile/options section > search for password change prompt > oh, looks like “account” was wrong… try “profile” > hmm.. nope… maybe it’s in a submenu somewhere > gah.. maybe I can google where their password change page is…….

How great would it be if there were a standard like www.example.com/password, the place to change your password on any website?
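
To make the idea concrete, here’s a minimal sketch of what supporting such a convention might look like, using Flask purely for illustration (the /account/settings/password target is a stand-in for wherever a site’s real change-password page actually lives):

    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical standard path: the site answers /password by
    # redirecting to wherever its real change-password page lives.
    @app.route("/password")
    def password_redirect():
        return redirect("/account/settings/password")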


How Namecheap is preventing thousands from reaching our site

Until yesterday, searchtempest.com had used namecheap.com for both domain registration and DNS. They are one of the least expensive registrars out there that isn’t named GoDaddy, and they generally have a good reputation, so this seemed like a reasonable choice. And DNS came free with domain registration, so we didn’t see any need to look elsewhere.

That was until several recent complete outages of their DNS servers. Now, we don’t blame Namecheap for that. Their business isn’t distributed DNS, and they certainly didn’t DDoS themselves. However, it did demonstrate our need for a more robust solution.

We settled on DNS Made Easy. They appear to provide a very robust, globally distributed, fast, user-friendly, and inexpensive solution. But this post isn’t about them. It’s about what happened when we tried to switch from Namecheap’s internal DNS servers to the ones from DNS Made Easy.

The right way to transfer DNS is pretty straightforward, but it’s important that it be followed to avoid downtime. Generally, nameserver records (the locations of the nameservers themselves) are cached for 48 hours. So, when you want to change your nameservers without downtime, you just follow these steps:

  1. Configure the new nameservers with all necessary records.
  2. Point the domain at the new nameservers.
  3. Wait 48 hours* for the cache period to expire.
  4. Remove the records from the old nameservers.

*or whatever the TTL of the NS records is
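
If you want to check what TTL your NS records actually carry before starting, here’s a quick sketch using the dnspython library (the domain is just a placeholder):

    import dns.resolver  # pip install dnspython

    # Look up the domain's NS records and print the TTL they were
    # served with, along with each nameserver. Note that a caching
    # resolver reports the remaining TTL, not the original value.
    answers = dns.resolver.resolve("example.com", "NS")
    print("NS record TTL:", answers.rrset.ttl, "seconds")
    for record in answers:
        print("nameserver:", record.target)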

This process is explained pretty succinctly in the first section here, for example:

But pay attention to the fact, that the NS records of your parent DNS servers are usually cached for 48 hours. Thus you should keep your old nameservers online for at least 48 hours after making the changes to your NS records.

The problem is that when we performed step #2, Namecheap immediately did #4, removing our records from their DNS. That means anyone who has accessed the site within the past 48 hours suddenly has a stale cache and is unable to get there again, unless they know to flush their DNS or wait out the 48 hours. (And if it’s their ISP that cached the DNS info, they have no choice but to wait.)

I immediately contacted Namecheap support, hoping that they could reinstate our records for the remainder of the cache period, but they repeatedly gave me the canned (and incorrect) response that downtime is inevitable with DNS transfers and that I should simply wait 24 hours (apparently oblivious to the fact that a 24-hour outage of a busy website is kind of a big deal, and that their NS records actually had a 48-hour TTL).

Eventually, after two fruitless rounds with Namecheap “tech” support, I was able to establish that they should have preserved our records, and that it is in fact their policy to do so for a period of five days. However, I then couldn’t convince them that this had not, in fact, happened.

Finally, with a bit of help from DNS Made Easy (which appears to have very competent tech support), we figured out the problem. Namecheap has two sets of nameservers, which they call “DNS v1” and “DNS v2”. The problems we had a couple of weeks ago were with v2, so we switched to v1 at that point while we sought out a more permanent solution. However, when we transferred yesterday, they preserved our records on their v2 servers (which we hadn’t even been using for weeks!), but not on v1, where they needed to be. I was finally able to explain this to the third Namecheap tech I spoke to, who told me that the v1 servers are controlled by a separate provider and there must be a problem on their end. She apparently sent them a ticket.

That was 13 hours ago now, with no resolution. I apologize profusely for the inconvenience users of searchtempest.com are suffering. Hopefully it’s some consolation that I’m at least as frustrated myself. If you’re unable to access www.searchtempest.com, you could try flushing your DNS cache. The easiest way to do that is to restart your computer (on Windows, running ipconfig /flushdns from a command prompt also works). If that doesn’t help, unfortunately the only options are to call your ISP and ask them to flush the nameserver records for searchtempest.com from their cache, or to wait until the cache expires – potentially until tomorrow afternoon.

Otherwise, all I can do at this point is warn others to avoid the same pitfall. Go ahead and use Namecheap for domain registration, but switch to an external DNS provider immediately, before your website has traffic. It is easily worth a few bucks a month to avoid these kinds of problems. If you’re already using their DNS services, it should be possible to transfer out without downtime, but make sure you’re on v2 before transferring out. And good luck.

Update:

I just followed up with Namecheap tech support for the fourth time, to ask why our records still haven’t been restored on their partner’s nameservers. Unfortunately, it sounds like the response they got from their partner was almost identical to the canned response they repeatedly gave me:

When you change nameservers for a domain name, these changes are not accepted instantly all over the world. It may take up to 24 hours (in rare cases more) for local ISPs to update their DNS cache, so that everyone can see your website. Since the caching time varies between ISPs, it takes time for DNS changes to be totally in effect. Unfortunately this process cannot be influenced or sped up because of its automated nature.

Once again, this ignores the real problem. We know DNS propagation is not instantaneous. But if they leave the records on the old nameservers until the TTL (time to live) of the old NS (nameserver) records has passed, everyone will still be able to access the site while the propagation takes place. What’s more, according to at least one of the tech support reps I spoke with, that is in fact their policy. It’s becoming clear, though, that the cache period will have expired long before I can find someone willing and able to make the ten-second change that would fix this problem.

Skype Click to Call makes Firefox painfully slow

I’ve been having some trouble with Firefox lately, particularly when trying to use Gmail. It’s been painfully slow (10-15 seconds just to change folders), and it locks up the whole browser in the process. I decided to finally take my own advice and go through the troubleshooting steps we suggest people use when they’re having similar problems with SearchTempest.com or AutoTempest.com.

I found that if I restarted Firefox in safe mode (with all add-ons disabled), things were once again nice and snappy. Switching labels in Gmail went from 10+ seconds to around one second. Sweet. At first I figured the culprit must be something Gmail-related, like PowerBot or Gmelius. (And I was pretty choked. Losing Gmelius wouldn’t be a big deal, but PowerBot is a huge productivity boost for me.) Fortunately though, the real culprit was an add-on I never use and never knowingly installed: Skype Click to Call. It appears that this add-on is installed automatically when you install Skype. It’s probably an opt-in of some sort, and I imagine I must’ve figured it would be useful at the time, but for some reason, they failed to mention that it would make the browser an order of magnitude slower…

Anyway, it’s gone now, and Gmail (and everything else) is fast once again!

How to prevent Google hammering server for old linked CSE specifications

Google’s Linked CSE is a fantastic tool. It allows you to dynamically generate a custom search engine for each of your users, or even for each individual visit, based on any parameters available to your application. This functionality has been invaluable for SearchTempest.com as we use custom search engines to provide customized multi-city searches of craigslist (no affiliation).

The problem with this approach is that when you create a Google Custom Search Engine (CSE) with a linked specification file on your server, Google’s “FeedFetcher-Google-CoOp” bot requests that file in order to build the CSE. It then continues to request the file regularly for at least a matter of months afterward, even if the CSE is never again used by an actual user.

In our case, it got to the point where the majority of all requests for files from our web server were for useless, outdated Google CSE specification files. Unfortunately, once this is happening, it appears there is no way to stop it. The best you can do is add a rule at either the web server or, ideally, the firewall level to block these requests. (Currently we return a 410 ‘gone’ response in as few bytes as possible.)
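
For illustration, here’s a minimal sketch of that kind of rule at the application level, using Flask (the file path is hypothetical, and in practice a web-server or firewall rule is cheaper than handling it in the application):

    from flask import Flask

    app = Flask(__name__)

    # Answer requests for an outdated CSE specification file with a
    # bare 410 'gone' and an empty body, keeping the response tiny.
    @app.route("/cse/annotations.xml")
    def old_cse_spec():
        return "", 410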

However, there is a way to avoid getting into this situation in the first place. In short, Google CSE specification files should be served from disposable subdomains. For example, create a subdomain called gcrefs1. For convenience you can point it at the same directory as your main (www) site. In your CSE setup, tell Google to access the file at http://gcrefs1.example.com/filename. Then, after a period of time (once Google’s FeedFetcher bot is making too many requests to the file for your liking), simply create a new subdomain (say, gcrefs2), update your references to point to the new subdomain, and then remove the DNS entries for the old one.

Of course, it’d be nice if Google’s FeedFetcher just respected robots.txt, or reacted properly to 410 responses, but given the usefulness of Custom Search Engines in general, I’ll take what I can get.

Update: It appears that Google ignores 410 responses, but not 301 responses. So by 301 redirecting an outdated cref file to null.html (for example), you should be able to convince them to stop requesting it. (Although the bot will run through each of its saved sets of request arguments one last time, since it sees each as a completely separate file.)
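
Continuing the hypothetical Flask sketch from above, the 301 version would look something like this:

    from flask import Flask, redirect

    app = Flask(__name__)

    # Permanently redirect the outdated CSE specification file to a
    # tiny static page; Google's bot appears to honor the 301 and
    # eventually stops requesting the old file.
    @app.route("/cse/annotations.xml")
    def old_cse_spec():
        return redirect("/null.html", code=301)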

Google not indexing craigslist – SearchTempest switches to Bing

As of February 28, Google has stopped indexing new craigslist posts. Or more specifically, every day between about 5pm and midnight PST, they index them as usual. Then at midnight, they throw them all away. So anyone searching Google for craigslist posts over the past couple of weeks has been faced with a giant gap since the beginning of March.

SearchTempest has no affiliation with craigslist, so until recently, we used Google to power our searches. Since Google is no longer getting the job done though, we’ve switched to Bing!

To be honest, Bing’s API doesn’t hold a candle to Google Custom Search. You can’t sort by date, specify a list of URLs to search (Google’s ‘annotations’), or even reliably search within the URL at all. (Bing does have a semi-hidden option, instreamset:(url):{text}, which is similar to Google’s inurl:{text}, but we’ve found it to be unreliable.)
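
For illustration, here’s roughly how the equivalent queries compare (a made-up example; “cto” and the search terms are just placeholders):

    Google:  site:craigslist.org inurl:cto "honda civic"
    Bing:    site:craigslist.org instreamset:(url):cto "honda civic"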

That said, through some clever manipulation of query strings and a mess of hard-coded special cases, we’ve managed to come up with a Bing-powered craigslist search that’s quite functional. If you’re frustrated by not being able to search craigslist through Google like before, give it a try!

NextDesk Terra Electronic Adjustable Height Desk Review

Since I spend a good chunk of my time hacking on SearchTempest and AutoTempest, and a good chunk of the rest of my time playing Starcraft II and such, I end up at my desk for a large part of the day. I decided it would be a good idea to get an adjustable sit/stand desk so I don’t spend that entire time sitting on my butt.

I did a bunch of research and ended up deciding on the Terra from NextDesks.com. I’ve come across some significant pros and cons regarding both the desk and the buying experience that weren’t mentioned in any of the (relatively few) reviews I found online, so I figured I’d share.

NextDesk Terra


First, the good. It does what it’s supposed to do. I got the extended version (73″ across), and it’s large, but not at all unwieldy or unattractive. It raises and lowers quite quickly (apparently the fastest available), and has three electronic presets. There are a number of color options available, and you can customize things like where you would like the controls to be, cable management options, keyboard tray or no, etc. There is a tiny shudder to the up and down motion, but I certainly wouldn’t worry about anything on the desk shifting. In short, it does its job well.

The cons primarily have to do with the buying and assembly experience, but there are a couple related to the desk itself that I will mention first. The main one is that the height preset buttons have to be held down while the desk is moving to the preset height. This is not the case with their main competitor, the GeekDesk Max. What is particularly irksome about it is that I explicitly asked their salesperson about this before purchasing the desk, because I know some other competitors do have preset buttons that need to be held down. He assured me that their presets do not need to be held, which is simply not true. I’m going to give the benefit of the doubt and say he just made an incorrect assumption, but since I was making a rather expensive purchase based on his word, I expect better.

Now, the NextDesk does adjust height quite a bit more quickly than the GeekDesk, but when you don’t have to hold the button, you can spend that time moving your chair, standing up or sitting down, and getting back to what you were doing while the desk does its thing. Since that time is much shorter but essentially wasted with the Terra, it’s a tie as far as which I’d prefer. However, the GeekDesk is almost half the price. (The presets are still useful, though. It’s nice to set your heights once and then not think about it, rather than always fiddling with the height trying to find the level that feels right.)

The other thing I’ve found with the desk itself is that a small chunk (perhaps 1/4″ x 1/8″) of the surface coating has flaked off at some point. The solid bamboo surface of this desk is supposed to set it apart from the competition, so it’s disappointing to see the finish disintegrating almost immediately. (I have no idea when it came off, but I certainly haven’t dropped anything on the desk or anything like that.)

One other thing to be aware of with the surface is that its color appears significantly lighter in person than in the color swatches on their website (or even the physical ones they mail out). Obviously the website versions will depend on the calibration of your monitor, but even next to the swatch they mailed with the desk, my “dark” surface looks much closer to the “medium” swatch. (Although when I took a picture of the desk with the swatches on it, the desk looked closer to the “dark” swatch in the picture, which explains why the photos look the way they do!) So this isn’t a knock exactly, just something to keep in mind: the desk will most likely appear lighter in person than you would expect from the swatches.

Finally, the process of buying and then assembling the desk definitely had some stumbling points. The good first though – they shipped the desk very quickly, and it was packaged extremely well. Essentially no chance of damage during transport, and it shipped in two separate boxes (for the top and the frame), which made things a lot easier to manage since it’s obviously large and potentially unwieldy otherwise.

However, even aside from the misinformation about the presets, I found their support to be somewhat underwhelming. To start with, I asked a simple question about shipping costs (I’m in Canada, so it’s cross-border shipping). Several times my emails went days or even weeks with no answer, and eventually I just gave up on the email conversation and resorted to phone calls. (And then I was promised callbacks on specific days, which never came, again requiring me to follow up later.) I also asked where the controls would be positioned, because I planned to set my working area up on the right side of the desk and wanted to make sure they wouldn’t be in the way. They told me I could have them wherever I wanted, but that if a position isn’t specified, they put them about 6 inches from the edge. That sounded perfect to me, so I didn’t specify a position when ordering. That was a mistake, as the desk arrived with the controls installed about 18″ from the edge instead (right where my leg wants to be). I drilled some new holes and moved them, but I shouldn’t have had to.

Then of course there was the presets thing, and when I wrote to complain about that after receiving and assembling the desk, I never received a reply. (It’s been about two weeks now.) And finally, their instructions are extremely poor. Again, this certainly isn’t a reason not to buy the desk, but if you do get one, definitely read through them a couple of times before getting started. There were a couple of parts where I had to do some disassembly because something that had to be done in an earlier step wasn’t specified until later (or at all). For example, the two legs are interchangeable, but the cables that run out of them to the central control box cannot be adjusted once the legs are attached. So if you don’t pay attention to which way they’re routed when you attach the legs, you’ll end up having to take the whole thing apart again to fix it. There were also a couple of guess-and-check steps, like the initialization process: “Press the Down button once or twice, holding it down.” Uhh… ok. (I pressed it twice, holding it down the second time. That didn’t work, so I tried pressing it once, holding it down. Still nothing. So I unplugged the desk, plugged it back in, pressed and held once, and it worked. Instructions fail.)

So, if you’re looking for an adjustable-height sit/stand desk, this one IS worth a look. Just be extremely explicit about how you want things set up, and be aware that regardless of what they say, the presets do need to be held down. Personally, I would prefer not to support a company that treats customers this way, but there is quite a lack of premium electronic height-adjustable desks out there at the moment. That said, the GeekDesk is certainly worth considering, as is this NewHeights desk. The main reason I wrote the latter off in my initial research was that its presets also have to be held down…

Hope that helps! Are any of you using an adjustable desk already? (Or just a standing desk?) What do you figure a company’s response should be after misleading a customer like this (assuming it was a mistake)?

Update – July 2014:
I wanted to update this post, as I just had a really good experience with Priya at NextDesk customer support. I finally decided to see if I could get warranty support for the shuddering issue mentioned in the comments, since it seemed to be getting worse. The response from the customer support email was instantaneous this time, and she took me through a set of calibration steps; when that didn’t help, she readily shipped me a new leg along with a return shipping label for the (presumably) defective one. It looks like the company may be maturing, which is great to see. Hopefully this new leg works out!

Update 2 – September 2014:
They seem to be trying hard, but so far the first replacement leg they sent me was also damaged, and the second was the wrong color. I also still haven’t received any shipping labels despite asking repeatedly, so I’ve got a growing collection of desk legs littering my office…

Code Compression

I was thinking recently about how any good code base tends to go through a continuous cycle of expansion and compression. (This thinking may have been inspired by the recent frantic development work on SearchTempest in the wake of craigslist blocking framing…) The ‘expansion’ part is the standard stuff: building something new, adding features, even fixing bugs most of the time tends to involve writing more code, causing the code base to expand.

However, if you only expand and never compress, eventually you will inevitably end up with a giant pile of pasta.

It’s critical to occasionally go through your code and simplify. Trace through the logic and figure out how it can be improved. Look at places where procedural stuff could be made object-oriented. Even just strip out legacy crap that isn’t used anymore.

Of course, this can be difficult to justify. There are always higher priorities, and it’s tough to put a bunch of time into a project that, in the best case, has no immediate visible effect. (Especially if you happen to answer to a manager who hasn’t personally done much/any coding.) And that’s the best case. It’s ironic, but this process of cleaning up code can very well introduce new bugs. After all, it may involve making fairly significant structural changes to a code base that is by all appearances working just fine. (And those bugs tend to make people… irate. Don’t do something like this then go away for the weekend.)

So, why do we bother? Here’s a similar issue: why do we bother researching sources of renewable energy? It has always – so far – been cheaper to just stick with fossil fuels. That may well continue to be the case nearly until they run out, since the incremental cost of extracting a barrel of oil does not increase linearly with its scarcity. But if we wait until we run out of oil, or until we destroy our atmosphere burning coal, it’s too late.

Of course, the consequences of ugly code are rather less dramatic, but the analogy holds. If you wait long enough, eventually you will be forced to clean up your spaghetti code because you’ll get to the point where adding one more hack will break the camel’s back. You’ll have a feature to add, or a bug to fix, and it simply won’t be possible to shoehorn it into the existing morass. When that day comes, it is NOT a fun day to contemplate redesigning your whole code base. Especially if the bug you’re trying to fix happens to be a critical one.

On the other hand, when such a fateful day rolls around, a clean code base can be a truly beautiful thing. A little irony: the best thing about nice clean code is that it makes it really quick and easy to slap on an ugly hack. And when your site’s down and the hysterical emails are rolling in, that ugly hack that gets you running again can be a beautiful thing too.

Or more generally, by periodically cleaning up your code, you make your job a lot easier the rest of the time. Of course, you could try to just ‘do things right the first time’. But even without deadlines (which takes us into imaginary-land), it’s pretty difficult to always keep the entire big picture in mind while solving a specific problem. Of course you should still try to write nice, clean, extensible code whenever possible. Sometimes though, you still have to take a step back.

In the real world, the cleanup of a given module will likely be spurred by some other development, which is okay. For me at least, there’s no motivation to sit down for the express purpose of prettifying code. But when you’ve already dug into something a bit and you start to see avenues for improvement -and you’re not in an absolutely critical time crunch- go for it! (If you’re always in a critical time crunch, you’re doing it wrong. Or someone is.) It’s just like cleaning in real life actually. Ever go to pick up a dirty sock and end up doing all the laundry then cleaning the entire house? Do that with code! (If not, go vacuum something. I bet your partner/roommate/cat will appreciate it.)

Nice clean code is a beautiful thing, but it’s elusive; you can’t aim straight for it. What you can do is write fairly decent code, then occasionally compress it. Channel your inner Superman and squeeze that code coal into a precious diamond. (Ha! Tied those analogies together!)

If it ain’t broke, now’s a great time to fix it!

Browser Debugging

To the web developers: ever get an obscure error in one browser, then run the code in a different browser to see if it’s more helpful? Try it next time you get a head-scratcher. 🙂 Just now Firefox was giving me “NetworkError 596”. To which I said, “Oh, of course!” 🙄 So I fired up Chrome, and it gave a much more helpful console message, helping me realize that I was accidentally trying to make a cross-domain JSON request instead of using JSONP. Oops.

Anyway, I’ve found this technique handy for a bunch of things. Sometimes Firebug’s message is better. Even IE sometimes gives the most verbose errors. Just depends on what obscure problem you’ve run into!