MongoDB Geospatial query always returns 100 results

Quick note to anyone finding that Mongo always returns 100 results with a geospatial query, for example, running:

> db.points.find({ "coordinates": { $near: { $geometry: { type: "Point", coordinates: [ -3.17465, 59.9649 ] }, $maxDistance: 10 }}} ).count();
100

The reason for this is that using $near implicitly limits results to 100 documents. Explicitly setting no limit (i.e. .limit(0)) doesn’t work, so the best solution I have at the moment is to set the limit really high:

> db.points.find({ "coordinates": { $near: { $geometry: { type: "Point", coordinates: [ -3.17465, 59.9649 ] }, $maxDistance: 10 }}} ).limit(999999).count();
2489

Removing dots (periods) from CSV field names for importing into MongoDB

One of the two restrictions on legal MongoDB key names is that they cannot include dots (periods). Unfortunately, dots are common in CSV field names, and mongoimport will happily insert fields containing them, leaving the resulting data unusable.

Here is a simple sed command to switch the dots to underscores in the field names ONLY (the leading 1 restricts the substitution to the first line, and -i edits the file in place):

sed -i '1s/\./_/g' yourFile.csv
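A quick sanity check of what the command does; the file and field names here are made up for illustration:

```shell
# Create a hypothetical two-row CSV, then rewrite only the header row.
printf 'user.name,user.age\nalice.smith,30\n' > sample.csv
sed -i '1s/\./_/g' sample.csv
cat sample.csv
# header becomes user_name,user_age; the data row keeps its dots
```

Only the first line is touched, so dots inside the data itself are left alone.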

Simple, but hopefully it will save someone some time in the future (myself included).

JavaScript: The World’s Most Misunderstood Programming Language

From Douglas Crockford: http://www.crockford.com/javascript/javascript.html

How to fix adb “no permissions” on Fedora

This post strings together the various hoops I had to jump through to fix the “no permissions” error when running adb devices.

Step 1

From the trusty Fedora Forums: http://forums.fedoraforum.org/showpost.php?p=1343484&postcount=5

$ sudo vim /lib/udev/rules.d/51-android.rules

Add the following (from http://ptspts.blogspot.co.uk/2011/10/how-to-fix-adb-no-permissions-error-on.html):

SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0e79", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0502", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0b05", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="413c", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0489", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="091e", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="18d1", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="12d1", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="24e3", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="2116", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0482", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="17ef", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="1004", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="22b8", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0409", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="2080", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0955", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="2257", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="10a9", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="1d4d", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0471", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="04da", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="05c6", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="1f53", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="04e8", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="04dd", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0fce", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0930", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="19d2", MODE="0666"

Then save the file.

Step 2

$ sudo service udev restart
$ sudo killall adb

Step 3

Disconnect the phone, reconnect, BOOM!

There were a few other things mixed in, but I think this is what actually cracked it. Let me know how it goes!

Compiling Node.js PostgreSQL client (node-postgres) on Fedora

I stumbled on an error when installing node-postgres via npm, like so:

npm install pg

The error (the google bait):

...
... fatal error: libpq-fe.h: No such file or directory
...

The fix (as you may expect):

yum install postgresql-devel

What’s stopping you?

What would you do today if you knew you couldn’t fail?

Anon

You make your own success. Is there really anything stopping you?

SSH Shortcut

Connecting to more than one server on a regular basis can become a pain, so I wrote a small shortcut to save time.

Remembering users, host names and ports is a pain, especially when they’re all different and very long. So, after stumbling on alias.sh on Hacker News, I was inspired to expand my use of bash shortcuts.

So go forth, copy this shortcut, put it in your ~/.bashrc, and add servers as you go. I currently have 9 in mine, all now accessible by simply typing:
sshs vps
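
The snippet itself didn’t make it into this note, but a minimal sketch of what such a function might look like is below; the server nicknames, users, hosts and ports are all made-up examples:

```shell
# Hypothetical "sshs" helper for ~/.bashrc: map a short nickname to a
# full ssh invocation so you never have to remember user/host/port.
sshs() {
  case "$1" in
    vps)  ssh -p 2222 deploy@vps.example.com ;;
    work) ssh admin@build.example.com ;;
    *)    echo "sshs: unknown server '$1'" >&2; return 1 ;;
  esac
}
```

Adding a server is then just one more line in the case statement.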

Bonus points if you then sync it across your computers (I recommend Spider Oak, but please use my referral link so we both get an extra 1GB for free)!

Twitcher – Twitter Account Switcher

Short

I built a Chrome extension that lets you switch between Twitter accounts: https://chrome.google.com/webstore/detail/twitcher-twitter-account/gmngpagflejjoblmmamaonmnkghjmebh

Long

I manage a few different Twitter accounts, and one of my biggest gripes is having to log in and out every time I want to switch accounts.

The mobile client handles this brilliantly, with a “switch account” feature baked in, but if you’re in your browser, as I usually am, you’re out of luck.

This is why I built the Twitcher Chrome extension:
https://chrome.google.com/webstore/detail/twitcher-twitter-account/gmngpagflejjoblmmamaonmnkghjmebh

I should say here that one viable option would be to use TweetDeck, which is good for heavy profile management, but personally I just prefer using the twitter.com UI, especially for my personal account.

It’s really easy to use: it saves any account you sign into and lets you instantly switch back to a previous session without signing in or out.

I would love to know if this was useful to anyone. I plan on releasing the code soon (although you can just inspect it anyway!).

Find your D-U-N-S number if you’re in the UK

In a process that only Apple could engineer, if you wish to join the iOS developer program with a “company” account you must provide a D-U-N-S number.

It isn’t blindingly obvious what this number is or how to acquire it. A bit of googling will lead you to the D&B D-U-N-S enquiry form, which says the process takes 10 days; three days after submitting it we hadn’t heard anything, and with deadlines looming we were getting a little twitchy.

After multiple calls and dead ends trying to speak to someone at D&B, I was finally shown that if you’re a limited company, you apparently already have one! The tool to find it is hidden away in the FAQs, but here’s a direct link:

http://www.dnb.co.uk/myduns

Gzip support in Chrome

Website optimisation 101 teaches us to gzip what we can to save on bandwidth and transfer time.

At The Student Cloud we encrypt and compress all data before storage, so with this in mind we thought it would be cool to skip inflating the data when delivering it back to the user. Sending it gzip’ed seemed to be a win for the user (speed/bandwidth) and a win for us (CPU). Hurrah!

I delicately disabled (commented out) data inflation and eagerly pointed my browser at a photo I had uploaded. When it didn’t work, I assumed it was an incompatibility due to the way I was compressing the data in PHP, but after a bit of googling it seemed like what I was doing should be OK. I finally stumbled on a blog post about browser gzip support suggesting that I try Opera; I dutifully fired it up and, sure enough, there was the image in all its glory! I tried a few other browsers and found it worked in IE9/8/7/6 but not in Firefox. How could this be!?

It turns out that Chrome (and Firefox) just don’t accept gzip’ed images! I can’t find any particular articulation of the reason on the developers’ sites, but I assume it is due to the following rationale, as Google explains (https://developers.google.com/speed/docs/best-practices/payload):

Don’t use gzip for image or other binary files. Image file formats supported by the web, as well as videos, PDFs and other binary formats, are already compressed; using gzip on them won’t provide any additional benefit, and can actually make them larger. To compress images, see Optimize images.

So there you have it: IE6 supports a feature that Chrome doesn’t! (EDIT: Not quite what I initially thought. Chrome does support gzip’ed images if the output has the right gzip file headers (and HTTP headers, but those weren’t the problem); notably, IE accepts gzip’ed content even without the correct file headers. More below.)

2 points:

  1. We are going to be more selective about what we compress; we have learnt that it simply isn’t worth it for a fair proportion of our data.
  2. We still don’t quite understand why nobody sends gzip’ed images, and why Chrome fails to inflate an image that has been gzip’ed even when it is sending headers like this:

     Accept: */*
     Accept-Encoding: gzip,deflate,sdch

     We must be missing something!

Correction:

plif pointed out that it wasn’t working because we were using the zlib.deflate stream filter, which does not correctly build the gzip headers. We were doing something like this:

<?php
// Claims a gzip encoding, but compresses via the zlib.deflate stream
// filter, which does not write the gzip header the browser expects.
header('Content-Encoding: gzip');
header('Content-Type: image/jpeg');
$fin = fopen('test.jpg', "rb");
$fout = fopen('php://output', "w");
$gzipFilter = stream_filter_append($fout, 'zlib.deflate', STREAM_FILTER_WRITE, array('level' => 6, 'window' => 15, 'memory' => 9));
stream_copy_to_stream($fin, $fout);
stream_filter_remove($gzipFilter);
fclose($fin);
fclose($fout);

Whereas if we did this:

<?php
// gzencode() produces a complete gzip container, matching the
// Content-Encoding: gzip header we send.
header('Content-Encoding: gzip');
header('Content-Type: image/jpeg');
$data = file_get_contents('test.jpg');
$gzdata = gzencode($data, 9);
$fout = fopen('php://output', "w");
fwrite($fout, $gzdata);
fclose($fout);

The headers are formed correctly and it does work in Chrome!
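
The difference is easy to see from the shell: a gzip stream always begins with the magic bytes 1f 8b, which is exactly the header the stream-filter version was missing.

```shell
# Inspect the first two bytes of gzip output: the 1f 8b magic number.
printf 'hello' | gzip -c | od -An -tx1 -N2 | tr -d ' \n'
# prints 1f8b
```

A body that doesn’t start with those bytes isn’t valid gzip, no matter what the Content-Encoding header claims.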

Correction 2:

colanderman points out that the first method does work, but with Content-Encoding: deflate. Thanks!