Pseudo-debugging Komodo JavaScript macros

While it isn't debugging in the strict sense, this technique lets you interrogate important variables in your Komodo JavaScript macros after they run (and presumably fail to do what you had expected).

So you've written a Komodo macro, and find it isn't behaving exactly
as you had expected. Maybe you're hitting an off-by-one error working
with the scimoz editor widget, or you're grappling with high-bit
characters. You'd like to try out your code interactively, but given
that it's running in a specialized environment, you can't just fire up
a JavaScript shell, right?

Here's a way around that problem. This article combines two articles I
wrote recently. The first showed how to use the Mozilla clipboard
object to access the system clipboard
(http://blogs.activestate.com/ericp/2007/10/index.html). The second
showed how to use Ted Mielczarek's Extension Developer Extension to
examine Firefox event objects after they fire
(see http://blogs.activestate.com/ericp/2008/01/exploring-firef.html).

It turns out that you can use the Extension Developer Extension with
Komodo as well. Todd Whiteman modified it so it
will run with Komodo -- get this version at --
About all he did was modify the install.rdf file so Komodo would load
it -- one of the advantages of building an app on Mozilla. He also added a
separate Python shell, but I'll focus on JavaScript in this article.

The technique I used in the second post I referenced was to save the
parts of your macro that you're interested in by copying them into
global variables. This is JavaScript: any variable you don't declare
in your macro will have global visibility (and yes, you can
communicate between different macros this way, but writing an
extension is more robust).

The leading plus sign marks the mod to the macro:

    var aclipboard = new Clipboard();
+    gclip = aclipboard;
    var url = aclipboard.get();

Now bring up the JavaScript Shell with the [Tools|Extension
Developer|JavaScript Shell] menu command.

Click on "enumerateWindows", and then click on the window you want to inspect.
The next shot shows that the shell knows about the
Clipboard object we saved a reference to, and we can
use it.

Finally, if I switch back to the main editor and press
paste, the text I set the clipboard object to appears.
This is in your system clipboard, so it will work with
other applications as well.

Now let's use the shell to find that string and
remove it.

First get a handle to the editor object (scimoz) with the
following code. Remember the shell has tab completion:

var sm = ko.views.manager.currentView.scimoz;

Now you can get rid of the text by typing a simple


But that won't give us a chance to interactively experiment
with scimoz. Let's put the text back and try it the long
way. I'll put the code here as text so you can copy and paste it into
the shell, but feel free to explore as you go. You can find the
scimoz API at <komodo install dir>/lib/sdk/idl/ISciMoz.idl.


// Tell SciMoz the range of text to search in:
var needle = "// hello there";
var needleLen = needle.length;
sm.targetStart = 0;
sm.targetEnd = sm.length;
var startPos = sm.searchInTarget(needleLen, needle);
var hitLine = sm.lineFromPosition(startPos);
var endPos = sm.positionFromLine(hitLine + 1);
sm.targetStart = startPos;
sm.targetEnd = endPos;
sm.replaceTarget(0, "");

Sure, it's all a hack, but in the absence of a Komodo
debugger, it's a pretty useful hack.

Komodo profile structure


What does each item in the Komodo profile directory relate to?


Komodo profile location
To find the location of the Komodo profile directory (also known as the Komodo application data directory), please see this faq:

Directory structure

Stores all of the user preferences for Komodo. If you
want to reset your Komodo preferences, the easiest way
is to simply remove these two files and Komodo will
rebuild them with the default values on next startup.

A "pickled" cache copy of prefs.xml. It is re-generated if

This file stores the individual file preferences, set
through the "Edit->Current File Settings" menu. Things
like the current file position, indentation settings,
encoding, bookmarks, folding and eol settings.

This file stores view state, such as the MRU (most
recently used) ordering, recently opened files, tab
ordering position, etc...

Stores information about your Komodo toolbox. You can copy
these between profiles, or to another machine.

Contains JSON files for each of the tools in your toolbox.

All API catalogs (codeintel cix files) that are added through
Komodo's "Code Intelligence" preferences get copied to this directory.

Stores all known project templates, available when
using the "File->New->New Project From Template" menu.

Stores all known file/language templates, available when
using the "File->New->New File..." menu.

This is where Komodo stores the sample files and projects.

This is where Komodo stores the user's custom keybindings
and color schemes.

The Komodo auto-save feature will save information
relating to Komodo's unsaved files in this directory.
If you open a file that has a matching backup in this
directory, Komodo will offer to restore the backup.

Code intelligence information. When Komodo scans any
source code file (PHP, JS, Python, etc...) or API
catalogs, it saves this processed information (containing
the function and variable information, calltips etc...)
to a file in this directory.

If your Komodo code intelligence is not working, it can
often be fixed by shutting down Komodo, removing this
directory and starting Komodo again, which will cause
the necessary code files to be rescanned and recreated.

Details relating to the Mozilla base that Komodo is
using (extensions, dialog and window settings, remote
file encrypted password files, etc...).

Vancouver RubyCamp Writeup: Coding Like it's 1982

I gave a talk at Vancouver RubyCamp (January 2008) on dealing with large data sets in web applications. I called my talk "Coding like it's 1982", and some of the slides paid tribute to the year, with photos of old computers, new wave bands, and real estate shots of some of my favorite "Vancouver Specials" (the more ordinary the better). A few people asked me to post the slides. Instead they're getting the commentary and some of the code. For those who weren't there and are wondering what a "Vancouver Special" is and what it has to do with the '80s, google the term and take the first hit.

Lately I've been playing around in my spare time with the Google Maps API and Rails. It turns out the choice of framework barely matters, as you spend most of your time in JavaScript when you work with this API.

Some background: it had been a few years since I had built a web application for fun, and I felt it was time again. Last time I did everything by hand with Perl and MySQL. This time I could use Rails (although after yesterday's sessions I'm considering Merb), and try to avoid writing so much of the client side by hand.

I was looking for an application that would have plenty of geographical data, and would also be somewhat interesting. I had thought of pulling garage sale listings out of craigslist and presenting them in a Google Maps mashup, but quickly discovered that my idea of fun isn't maintaining an ever-growing table of regexes intended to extract correct addresses from craigslist.org. Discovering that "2850 4 1/2 St Cloud" is really "2850 4 1/2 St. North, Saint Cloud, Minnesota 56303"[1] isn't that straightforward. When the user misspells their hometown, or even leaves it out because it should be obvious from the context, that makes it harder. And no regex is going to extract an address from "behind the Norht Village Mall", typo and all.

The final straw was a recent article on wired.com about a startup called Listpic, which was pulling photos of items for sale off craigslist and displaying them in a friendlier manner. The article describes how one day its founder, Ryan Sit, saw an email arrive from Jim Buckmaster, CEO of craigslist, and hoped it was an offer to purchase. Instead it was a cease-and-desist notice for violating craigslist's Terms of Service.

While I'm very interested in the issues around who actually owns the data people add to Web 2.0 services, that wasn't going to give me opportunities to present the code I discussed at RubyCamp. The part of the article that was interesting was the discussion of a service called Oodle.com, which scrapes legitimate sources, organizes the information somewhat, and makes it available via XML-RPC. I had a look at the site, and saw that they had a sizable list of garage sales, even for December. Then I saw the foreclosure category, and given the current economic news, figured there'd be way more data there, and found my new app. Let's look at some of the code behind it.

First, I defined my database schema with a simple migration:

class CreateProperties < ActiveRecord::Migration
  def self.up
    create_table :properties do |t|
      t.column :latitude, :float, :null => false
      t.column :longitude, :float, :null => false
      t.column :oodle_id, :string, :limit => 16, :null => false  # buncha digits
      t.column :created_at, :datetime
      t.column :oodle_created_at, :datetime
    end
    add_index :properties, :oodle_id
  end
end

All I need for a GMap application are the latitude and longitude fields. I can use the Oodle ID on an entry to go back to Oodle for more information as I need it. I figured both timestamps would be useful -- the "oodle_created_at" to know how to resolve duplicate listings by taking the most recent, and my own timestamp to help remove aged entries from my database in a sweeper.
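To make the sweep-and-dedupe idea concrete, here's a plain-Ruby sketch of my own (no ActiveRecord; the hash keys mirror the migration's columns, and the 50-second cutoff in the example is arbitrary):

```ruby
# Sketch of the sweep/dedupe logic described above -- illustrative only,
# not the app's actual code. Listings are hashes keyed like the migration.
def sweep(listings, cutoff)
  # Drop entries older than the cutoff...
  fresh = listings.select { |l| l[:created_at] >= cutoff }
  # ...then resolve duplicate oodle_ids by keeping the most recent listing.
  fresh.group_by { |l| l[:oodle_id] }
       .map { |_, dups| dups.max_by { |l| l[:oodle_created_at] } }
end
```

In the real app the same two steps would be a `delete_all` with a date condition plus an `ORDER BY oodle_created_at` when resolving duplicates.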

Next I wrote a tiny Ruby program to get the data and dump it in a text file:

#!/usr/bin/env ruby
require 'xmlrpc/client'

class OodleFC
  @@increment = 25
  @@endpoint = 'http://api.oodle.com/api/'
  @@methodName = 'get'
  @@OODLE_KEY = 'precious'

  def initialize(region='usa')
    @params = {
      'partner_id' => @@OODLE_KEY,
      'category' => 'housing/sale/foreclosure',
      'region' => region,
      'from' => 0,
      'to' => @@increment,
    }
    @service = XMLRPC::Client.new2(@@endpoint)
    @rails_id = 1
  end

  def process_next_set
    result = @service.call(@@methodName, @params)
    return false if !result['items'] || result['items'].size == 0
    @params['from'] += @@increment
    @params['to'] += @@increment
    result['items'].each do |item|
      # Process each set of items here -- it's a simple hash
      vals = [@rails_id]  # we need to provide our own ID #s
      @rails_id += 1
      vals << item['latitude']
      vals << item['longitude']
      vals << item['id']
      # Process other items...
      puts vals.join("|")
    end  # end result['items'].each
    return true
  end # end function
end

getter = OodleFC.new()
loop do
  break if !getter.process_next_set()
end

If you haven't used XML-RPC, or it's been six years or so, the Ruby library makes it very straightforward. It takes three lines: one to require the library, one to create an XMLRPC::Client instance, and one call to the service with the parameter hash, which returns an array of items, each of which is a hash.

My plan here was to write out the data to a simple text file and read it into my MySQL database using the mysqlimport command. Because of this, I was bypassing ActiveRecord, and had to provide explicit values for the ~id~ field. Importing the data was easy:

db $ grab_data.rb > properties.out
db $ mysqlimport --delete --fields-terminated-by='|' --user=realtor --local \
foreclosures_development ./properties.out
foreclosures_development.properties: Records: 5936  Deleted: 0  Skipped: 0 Warnings: 22

I don't know what the warnings were, and couldn't find a way to coax them out of mysqlimport, but it looks like I got everything:

data $ wc -l properties.out
   5936 properties.out

So now there are three quick steps left to get going:

  1. Build a view to hold a map and display the data
  2. Build a controller to retrieve the data
  3. Write the JavaScript that displays the data

Let's look at a simple example of each one in turn. Here's app/views/fc/map.rhtml:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Find A Foreclosure</title>
    <script type="text/javascript">
      var gkey = "<%= GOOGLE_MAPS_KEY %>";
    </script>
    <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=<%= GOOGLE_MAPS_KEY %>"
            type="text/javascript"></script>
    <%= javascript_include_tag 'application' %>
    <%= stylesheet_link_tag 'style' %>
  </head>
  <body>
    <div id="map" style="width: 500px; height: 300px"></div>
  </body>
</html>

Here's the controller:

class FcController < ApplicationController

  def fc_in_bounds
    ne = params[:ne].split(',').collect{|e| e.to_f}
    sw = params[:sw].split(',').collect{|e| e.to_f}
    # if the NE longitude is less than the SW longitude,
    # it means we are split over the meridian.
    if ne[1] > sw[1]
      conditions = 'longitude > ? AND longitude < ? AND latitude <= ? AND latitude >= ?'
    else
      conditions = '(longitude >= ? OR longitude < ?) AND latitude <= ? AND latitude >= ?'
    end
    fcs = Property.find(:all,
                        :conditions => [conditions, sw[1], ne[1], ne[0], sw[0]])
    # Now convert the list of foreclosures into a simple array of hashes
    fcs = fcs.map{|p|
      { :latitude => p.latitude.to_f,
        :longitude => p.longitude.to_f,
        :oodle_id => p.oodle_id }
    }
    render :text => {:result => fcs}.to_json
  end
end

Some of this code came from Apress's "Beginning Google Maps Applications with Rails and Ajax" (http://www.amazon.com/Beginning-Google-Maps-Applications-Rails/dp/159059...), in particular the code above that shows how to handle data that straddles the international dateline.

I should say the book was useful, despite a title that was only a step or two above a "Dummies" title. The Rails part of the book often suggests it was translated from another framework (for example, there is no talk of testing), and there's a book on the same topic using PHP by the same group of authors, but I still found it useful. I found it a good use of my time to read through most of this book to get a sense of what I could do, and how to do it. Since then I've put the book aside and use the API reference.

The last piece is the JavaScript file, ~public/javascripts/application.js~:

var centerLatitude = 30.5;
var centerLongitude = -155.5;
var startZoom = 3;
var map;
var do_refresh = true;

function init() {
    if (GBrowserIsCompatible()) {
        map = new GMap2(document.getElementById("map"));
        map.setCenter(new GLatLng(centerLatitude, centerLongitude), startZoom);
        map.addControl(new GLargeMapControl());
        map.addControl(new GScaleControl());
        map.addControl(new GMapTypeControl());
        GEvent.addListener(map, 'zoomend', function(oldLevel, newLevel) {
            // zooming requires this: remove the existing points
            map.clearOverlays();
            updateMarkers();
        });
        GEvent.addListener(map, 'moveend', function() {
            updateMarkers();
        });
        setTimeout(updateMarkers, 1000, true);
    }
}

function createMarker(gpoint) {
    var marker = new GMarker(gpoint);
    GEvent.addListener(marker, 'click', function() {
        var markerHTML = (gpoint.lat()
                          + ", "
                          + gpoint.lng());
        marker.openInfoWindowHtml(markerHTML);
        do_refresh = false;
        setTimeout(function() {
            do_refresh = true;
        }, 5000);
    });
    return marker;
}

function updateMarkers() {
    if (!do_refresh) return;
    //create the boundary for the data
    var bounds = map.getBounds();
    var southWest = bounds.getSouthWest();
    var northEast = bounds.getNorthEast();
    var url = ('/fc/fc_in_bounds'
               + '?ne=' + northEast.toUrlValue()
               + '&sw=' + southWest.toUrlValue());

    //retrieve the points using Ajax
    var request = GXmlHttp.create();
    request.open('GET', url, true);
    request.onreadystatechange = function() {
        if (request.readyState == 4) {
            if (request.status != 200) {
                GLog.write("status: " + (request.status || "?"));
            } else {
                var data = request.responseText;
                var edata = eval("(" + data + ")");
                //remove the existing points
                map.clearOverlays();
                var points = edata.result;
                //create each point from the list
                for (var i = 0; i < points.length; i++) {
                    var gp = new GLatLng(points[i].latitude, points[i].longitude);
                    var marker = createMarker(gp);
                    map.addOverlay(marker);
                }
            }
        }
    };
    request.send(null);
}

window.onload = init;
window.onunload = function() {
  // unloaded = true;
};

The above code should be familiar to anyone who's built a GMap API app. Since I don't have time to teach the basics here, you'd be well advised to pick them up elsewhere (and you could do worse than the Apress book I've mentioned); you're welcome back afterwards.

That's the core of a Google Maps + Rails app. There are several performance problems that come up, which I discussed at RubyCamp and will cover here.

By the way, the "do_refresh" variable solves a problem I noticed immediately, but that isn't covered in the standard examples I found. Whenever I clicked on a marker, if the popup information window was initially off-screen, the map would scroll to bring it into position, which would trigger another moveend event, updating the markers. Seems like a bug to me, but until it's fixed, the workaround was easy:

The "do_refresh" variable is set to false whenever I show an info window, and I use a setTimeout to turn it back on after five seconds. We'll see a few more uses of that function coming up.

The next opportunity for performance improvement came up in maps like this one of the Los Angeles area. It took about 15 seconds to render 737 properties on a quad-core 2.4 GHz machine:

Imagine an investor trying to take advantage of the current climate. She wants to find a reasonable property in southern California, and is stuck in a SigAlert[2] nightmare on the Santa Ana. Good thing she's got her iPhone, but by the time this map finally renders, the great deal could be gone. The map shows some neighborhoods of Los Angeles that are rife with foreclosures, but we can't even see their names because the markers obscure them.

Performance enhancement #1: replace simple markers with clusters.

Now when the server sees there are more than n markers, it can cluster some of them, like so:

class FcController < ApplicationController

  include ApplicationHelper

  def fc_in_bounds
    ne = params[:ne].split(',').collect{|e| e.to_f}
    sw = params[:sw].split(',').collect{|e| e.to_f}
    # if the NE longitude is less than the SW longitude,
    # it means we are split over the meridian.
    if ne[1] > sw[1]
      conditions = 'longitude > ? AND longitude < ? AND latitude <= ? AND latitude >= ?'
    else
      conditions = '(longitude >= ? OR longitude < ?) AND latitude <= ? AND latitude >= ?'
    end
    fcs = Property.find(:all,
                        :conditions => [conditions, sw[1], ne[1], ne[0], sw[0]])
    fcs = fcs.map{|p|
      { :title => p.oodle_title,
        :latitude => p.latitude.to_f,
        :longitude => p.longitude.to_f,
        :price => p.price,
        :zipcode => p.zipcode,
        :url => p.url,
        :oodle_id => p.oodle_id,
        :city => p.city || "",
        :state => p.state,
        :type => 'm'
      }
    }
    max_markers = 100
    if fcs.size > max_markers
      fcs2 = cluster_points_by_distance(fcs, max_markers, ne, sw)
    else
      fcs2 = fcs
    end
    render :text => {:result => fcs2}.to_json
  end

  def cluster_points_by_distance(points, max_markers, ne, sw)
    points = cluster_by_distance(points, max_markers, ne, sw)
    # At this point we've got max_markers or fewer points to render.
    # Now go through and determine which cells have multiple markers
    # (which need to be rendered as a cluster), and which have a single marker.
    results = []
    points.each do |p|
      if p.is_cluster?
        results << {
          :latitude => p.y,
          :longitude => p.x,
          :members => p.members.map{|m| m.point[:oodle_id]},
          :type => 'c'
        }
      else
        results << p.point
      end
    end
    return results
  end
end


The routine cluster_by_distance is implemented in code that I left in the app/helpers/application_helper.rb file. (It should be in a controller helper, but I left it that way.) It's posted standalone as a separate attachment. The code points to a Wikipedia article on the algorithm it implements.

The Google Maps book shows how to cluster by grid. I used their code as well, but since I used it as is from their book, I'd rather not repeat it here. You can download the sample source code at http://www.apress.com/book/downloadfile/3565, and find the code in the "chap_seven" directory (I have no idea why they didn't use directory names like "chap_07" that would sort reasonably well).

The only difference this time is that we're either returning a cluster that contains an array of IDs, or we're returning a simple property (type "m", for marker, which isn't the best name). Now we need to update the JavaScript code to handle this:

var centerLatitude = 30.5;
var centerLongitude = -155.5;
var startZoom = 3;
var map;

//create an icon for the clusters
var iconCluster = new GIcon();
iconCluster.image = "http://googlemapsbook.com/chapter7/icons/cluster.png";
iconCluster.shadow = "http://googlemapsbook.com/chapter7/icons/cluster_shadow.png";
iconCluster.iconSize = new GSize(26, 25);
iconCluster.shadowSize = new GSize(22, 20);
iconCluster.iconAnchor = new GPoint(13, 25);
iconCluster.infoWindowAnchor = new GPoint(13, 1);
iconCluster.infoShadowAnchor = new GPoint(26, 13);

//create an icon for the pins
var iconSingle = new GIcon();
iconSingle.image = "http://googlemapsbook.com/chapter7/icons/single.png";
iconSingle.shadow = "http://googlemapsbook.com/chapter7/icons/single_shadow.png";
iconSingle.iconSize = new GSize(12, 20);
iconSingle.shadowSize = new GSize(22, 20);
iconSingle.iconAnchor = new GPoint(6, 20);
iconSingle.infoWindowAnchor = new GPoint(6, 1);
iconSingle.infoShadowAnchor = new GPoint(13, 13);

// I bought the book, I don't feel guilty using their icons, but wouldn't
// rely on them for a live application.

var iconTypeFromCode = {c: iconCluster, m: iconSingle};

function createMarker(gpoint, appPoint) {
    var type = appPoint['type'];
    var marker = new GMarker(gpoint, iconTypeFromCode[type] || iconSingle, true);
    GEvent.addListener(marker, 'click', function() {
        // same code as above
        // ...
    });
    return marker;
}

// Same code as above

    request.onreadystatechange = function() {
        // ...
        //create each point from the list
        for (var i = 0; i < points.length; i++) {
            var gp = new GLatLng(points[i].latitude, points[i].longitude);
            var marker = createMarker(gp, points[i]);
        }
        // ...
    };

Now the map is clearer:

There are other things I'd like to do, like add numbers to the cluster icons, so I can see that the cluster in San Bernardino represents 100 properties, while the cluster near Murrieta in the south might represent only 30. I'd also use color to distinguish the expensive properties from the cheap. Those will have to wait for a later date though. There were still perf problems to deal with.

The first is that I noticed sometimes a response would arrive, and my JavaScript code would dutifully fill in the map. And as soon as it was done, a new response would arrive, so the code would erase all the markers and do it all over again. Here's the sequence of events that was taking place:

  • user nudges the map
  • JS sends an Ajax request A to the server
  • user nudges the map again
  • JS sends an Ajax request B to the server
  • the response for request A arrives, and JavaScript updates the map
  • the response for request B arrives, and JavaScript updates the map

I handled this situation by adding a timestamp on every request, and keeping track of what the latest timestamp was. I'll show the changes to the server first:

  def fc_in_bounds
    # ...
    render :text => {:requestTag => params[:tag] || "", :result => fcs2}.to_json
  end

Yeah, the server just echoes back the tag parameter. All the work is done in the client:

var request_tag = 0;
// ...

function updateMarkers() {
    if (!do_refresh) return;
    //create the boundary for the data
    var bounds = map.getBounds();
    var southWest = bounds.getSouthWest();
    var northEast = bounds.getNorthEast();
    request_tag = (new Date()).valueOf(); // Global
    var url = ('/fc/fc_in_bounds'
               + '?ne=' + northEast.toUrlValue()
               + '&sw=' + southWest.toUrlValue()
               + '&tag=' + request_tag);  // New

    //retrieve the points using Ajax
    var request = GXmlHttp.create();
    request.open('GET', url, true);
    request.onreadystatechange = function() {
        if (request.readyState == 4) {
            if (request.status != 200) {
                GLog.write("status: " + (request.status || "?"));
            } else {
                var data = request.responseText;
                var edata = eval("(" + data + ")");
                if (edata.requestTag != request_tag) {
                    GLog.write("ignoring old request");
                    return;
                }
                // ... the rest is the same
            }
        }
    };
    request.send(null);
}

This addition made the client work more smoothly. But I didn't like the way that the server was still happily pulling items out of the database and partitioning them into clusters, only to have all that hard work blithely tossed away. I started wondering if I could avoid doing that as well.

Now the key event handlers on the client side are the zoomend and moveend events. Supposedly these fire once a user has hit the end of a series of zooms or moves. I thought maybe Google was being too optimistic on how much of a delay is needed to indicate when the user has reached the end of an operation, and thought maybe I could wait another 200 milliseconds or so. In this case rather than call the updateMarkers routine immediately, I would use setTimeout to simulate a queue of requests in the client side. I would use a separate timestamp on each request, so the client could decide when a request was the most recent, and only then fire it.

Once again, not many changes were needed to the code. And once again, I turned to ~setTimeout~:

var pendingRequest = null;
var checkDelay = 300; // wait checkDelay msec before hitting server.

// ...

function updateMarkers(do_now) {
    if (!do_refresh) return;
    if (typeof(do_now) == "undefined")  do_now = false;
    var currRequestTag = (new Date()).valueOf();
    // update the global
    pendingRequest = {tag: currRequestTag, bounds: map.getBounds()};
    if (do_now) {
        finishUpdatingMarkers(currRequestTag);
    } else {
        GLog.write("New pending request: tag " + currRequestTag);
        setTimeout(finishUpdatingMarkers, checkDelay, currRequestTag);
    }
}

function finishUpdatingMarkers(expectedTag) {
    if (pendingRequest.tag != expectedTag) {
        GLog.write("tossing tag " + expectedTag);
        return;
    }
    //create the boundary for the data
    request_tag = expectedTag;  // this is the global!
    var bounds = pendingRequest.bounds;
    var southWest = bounds.getSouthWest();
    var northEast = bounds.getNorthEast();
    var url = ('/fc/fc_in_bounds'
               + '?ne=' + northEast.toUrlValue()
               + '&sw=' + southWest.toUrlValue()
               + '&tag=' + request_tag
               + '&cl=' + clusterStyle);
    // rest is the same
    // ...
}

This change split the ~updateMarkers~ routine into two -- the first part starts preparing the request, but only suggests that ~finishUpdatingMarkers~ carry it out. ~finishUpdatingMarkers~ acts as a filter, throwing out any partial requests that it knows will be out of date.

Since I haven't left the development phase of this project yet[4], I've always run both the client and the server on the same machine. I noticed a definite improvement after this step.

The database schema I have suggests some improvements. First, on every query I carry out a calculation on every property to see whether it's in bounds. But I know some facts about the geography of the planet: I can partition the map into a grid, assign each grid cell a number based on its latitude and longitude, and track which points fall into each cell at each zoom level. I also know that at some zoom levels, like "1", which shows the whole planet, all my data will be hit, so I can translate the query into a "select *".
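As a sketch of the cell-numbering idea (my own illustration, not the app's actual scheme; I'm assuming the cell count doubles per axis at each zoom level, roughly matching Google's tiling):

```ruby
# Map a (lat, lng) point to integer grid coordinates at a given zoom level.
# Zoom 0 is one cell for the whole planet; each zoom level doubles the
# number of cells per axis. Illustrative assumption, not the app's code.
def grid_cell(lat, lng, zoom)
  cells = 2**zoom
  lat_base = (((lat + 90.0)  / 180.0) * cells).floor
  lng_base = (((lng + 180.0) / 360.0) * cells).floor
  [zoom, lat_base, lng_base]
end
```

A bounding-box query then becomes a lookup on the (zoom, lat_base, lng_base) index rather than a per-row comparison.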

I'd start with this migration:

class CreateTiles < ActiveRecord::Migration
  def self.up
    create_table :tiles do |t|
      t.column :zoom, :integer
      t.column :lat_base, :integer
      t.column :lng_base, :integer
      t.column :property_id, :integer
    end
    add_index :tiles, [:zoom, :lat_base, :lng_base]
  end
end

I don't have any code for this, so I'll leave it as an exercise. I should mention that all my calculations involving distance use the Euclidean formula we all learned in grade 7, and don't take into account that the Earth is a sphere, let alone an ellipsoid. That works in this application, because all the data is plotted on a Mercator projection, where horizontal distances are exaggerated as they move toward either pole. If you want to show which points are closest more realistically, you'll need to use the correct formulas.
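For the record, here's what the "correct formulas" look like -- a haversine sketch of my own, not code from the app:

```ruby
# Great-circle (haversine) distance between two lat/lng points, in km.
# Treats the Earth as a sphere of mean radius 6371 km -- still not an
# ellipsoid, but far better than Euclidean distance on a Mercator map.
def haversine_km(lat1, lng1, lat2, lng2)
  rad  = Math::PI / 180.0
  dlat = (lat2 - lat1) * rad
  dlng = (lng2 - lng1) * rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.sin(dlng / 2)**2
  2 * 6371.0 * Math.asin(Math.sqrt(a))
end
```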

I also suggested having the client and the server both keep track of how long certain operations take. The client could keep records on how long it takes to render a certain number of points, and constantly suggest to the server the maximum number of points it's prepared to accept.
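That feedback loop could be as simple as this sketch (mine, not code from the talk; the two-second budget and the sample format are assumptions):

```ruby
# From recent (point_count, render_ms) samples, estimate the largest number
# of points the client can render within a time budget. Sketch only -- the
# budget and the simple averaging are illustrative assumptions.
def suggested_max_points(samples, budget_ms = 2000)
  per_point = samples.map { |count, ms| ms.to_f / count }
  avg = per_point.inject(:+) / per_point.size
  (budget_ms / avg).floor
end
```

The client would send this number along with each request, and the server would cluster down to it.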

Some server operations take a long time as well. If you're sure there aren't any changes you can make, you could have the client preface one of these requests with a preliminary request on whether this is going to be an expensive operation or not. If the server replies (asynchronously, of course) that it will be, the client could break the request into smaller areas, dividing the map into four parts, for example. Then it would work on each part in a separate request.
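Splitting the request is straightforward once you have the bounds; here's a sketch (my own, and it ignores boxes that wrap around the dateline):

```ruby
# Split a bounding box into four quadrants, each of which can be fetched
# in its own request. Bounds are (south, west, north, east); this sketch
# ignores boxes that wrap around the international dateline.
def split_bounds(south, west, north, east)
  mid_lat = (south + north) / 2.0
  mid_lng = (west + east) / 2.0
  [[south,   west,    mid_lat, mid_lng],   # SW
   [south,   mid_lng, mid_lat, east],      # SE
   [mid_lat, west,    north,   mid_lng],   # NW
   [mid_lat, mid_lng, north,   east]]      # NE
end
```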

If your web application's taking too long, don't give up on it. As I've shown here, there are plenty of approaches you can take to find performance gains, keep your users happy, and, most important, keep your users.

[1] This is a real street, possibly not a real address, but Google Maps was able to resolve it to an actual location. It's been a while since I've been in St. Cloud, and even then I'm not sure if my booster seat was high enough to let me see out the window, but if there's a residence at that location, I hope I didn't compromise anyone's privacy. Please leave them alone if that's the case. They didn't ask to have their address published here.

[2] From "Grey in L.A." by Loudon Wainwright, on this album.

[3] On Amazon right now used copies of the PHP book are at $21.62, retail $23.09. The Rails book sells used for $18.65, against the conveniently same retail price of $23.09. Draw your own conclusions.

[4] And possibly never will, if the U.S. housing market turns around as fast as some commentators suggest it will.

Windows Environment for Remote Perl Debugging


How do I start a Perl debugging session in Komodo from the command-line?


This is a variant of remote debugging, which is covered in Komodo help. But
since it's a frequently-asked question, I thought I'd put a note here.

First, make sure Komodo is listening on a specific port using
Preferences|Debugger|Connection, and make sure the option for
"Komodo should listen for debugging connections on:" is
"a specific port:". We conventionally use port 9000, but you can
choose any free port.

Second, I set these environment variables in a Windows command shell:

set PERLDB_OPTS=RemotePort=localhost:9000
set kodir=c:/Program Files/ActiveState Komodo IDE 4.2/lib/support/dbgp/perllib
set PERL5DB=BEGIN { require q(%kodir%/perl5db.pl) }
set PERL5LIB=%kodir%

Use forward slashes in the paths on Windows. These values will be used
by Perl, and while it can use backslashes, you have to take the trouble
to make sure they're correctly escaped. Forward slashes raise no such problem.

Unlike on Unix/Linux/OS X, don't quote the values. If you do, the quotes end up
as part of the environment variable's value, and Perl doesn't
expect that.

Debugging a Perl program is then simply a matter of launching it like so:

perl -d foo.pl

Yes, it's the same way you start a command-line debugger session, but the
environment variables direct Perl to use Komodo's debugger.
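If you'd rather script this setup than type it each time, the same three variables are all that's needed. Here's a minimal Python sketch, assuming the Komodo 4.2 install path from above (adjust it for your machine):

```python
import os
import subprocess

# Assumption: adjust this path to your own Komodo installation.
# Forward slashes avoid the backslash-escaping problem noted above.
kodir = "c:/Program Files/ActiveState Komodo IDE 4.2/lib/support/dbgp/perllib"

env = dict(os.environ)
env["PERLDB_OPTS"] = "RemotePort=localhost:9000"   # port Komodo listens on
env["PERL5DB"] = "BEGIN { require q(%s/perl5db.pl) }" % kodir
env["PERL5LIB"] = kodir

# With perl on PATH and Komodo listening on port 9000, this would
# start the debug session:
# subprocess.call(["perl", "-d", "foo.pl"], env=env)
```

The commented-out call is left for you to enable once Komodo is listening; everything above it just builds the environment.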

Komodo 4.3 features

Product: Komodo

This is an overview of the major items included in the Komodo 4.3 release.

Unit testing integration

Komodo 4.3 offers an interface for unit testing with Perl, PHP, Ruby and Python. This first version supports common uses of Perl's "make test" command, PHPUnit, Ruby Rakefiles, and Python's unittest module.

Unit tests can be defined globally, or within a project. Unit test output is displayed in the Unit Test Results tab in the bottom pane, where errors can be clicked to jump to the relevant file and line number.

Troy and Eric have written up additional details to get you started with unit testing here:

Here is a screenshot of the new Test Results Tab:

Asynchronous SCC command handling

Komodo's SCC commands now work asynchronously; that is, they run in the background without locking up the Komodo UI. A "throbber" image indicates that an asynchronous operation is pending in the background. Once the SCC command completes, a notification is sent to the UI to handle the results.

Here is a screenshot of the SCC History operation in action:

Find in Project and Replace in Files

The Komodo Find/Replace dialog has been completely redesigned, unifying the old Find/Replace and "Find in Files" dialogs to make them easier to use. As part of this rewrite, some new features have been added:

  • Find in Project: "Right-click > Find..." on any item in a project to search within it.
  • Replace in Files
  • Multi-line Find and Replace
  • Many fixes for regular expression searches using the '$' anchor.
  • A new Find sub-system that properly handles Unicode-encoded files and skips binary files.

Other Komodo releases

Disable or modify the Komodo auto update feature


Some people find Komodo's auto-update feature annoying, or simply do not wish to be notified about product updates. This FAQ shows how you can customize the time between checks or disable the feature entirely.


Komodo uses the same underlying auto-update technology as Mozilla Firefox, which means you can disable it or change the update check interval in the same way: by changing the application's "about:config" settings.

To open the "about:config" settings in a Komodo tab, double-click the "View about:config" macro in the Toolbox under "Samples|Sample_Macros" or follow the instructions listed here:


Now, the keys that relate to the auto-update are:

  • app.update.enabled: whether application auto-update is enabled
  • app.update.interval: the number of seconds between application update checks
  • extensions.update.enabled: whether extension auto-update is enabled
  • extensions.update.interval: the number of seconds between extension update checks

Once modified, those settings take effect immediately, so after making your changes just close the about:config tab and go about the rest of your business :)
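Since both interval keys are measured in seconds, it helps to work out the value you want before editing about:config. A quick calculation:

```python
# app.update.interval and extensions.update.interval take seconds.
HOUR = 60 * 60
DAY = 24 * HOUR   # check once a day
WEEK = 7 * DAY    # check once a week

print(DAY, WEEK)  # 86400 604800
```

So a value of 86400 checks daily, and 604800 checks weekly.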


Pretty XML Preview in Komodo

Product: Komodo | tags: XML

There are so many features in Komodo that it can be interesting to see whether you can make it do something new. I've been thinking about how users can edit various XML dialects and get some kind of preview that reflects what the actual output would look like. With a bit of CSS, you can use Komodo's existing preview functionality to get better integration with some XML formats. This can be especially useful for DocBook, DITA, or any of the many other document-oriented XML formats.

While this is not a full WYSIWYG environment, XML+CSS can help you edit documentation files kept in an XML format. Mozilla has had this capability for quite some time, so it has been available in Komodo as well. The trick is often just knowing something is there, then figuring out how to coax something useful out of Komodo.

For this article, I'm going to use the birds.xml file that is in the sample project provided with Komodo. If you open the "Start Page" (available via the Window menu if you have closed it), you can access the sample project that contains birds.xml.

First, let's make sure the preview is configured correctly for this article. Go to your preferences in Komodo, and select the "Web & Browser" panel. Under the "Preview in Browser" section, choose "Preview in Komodo tab, other tab group".

Go ahead and open birds.xml now. Once it is open, preview the document in a browser: use the [CTRL+K, CTRL+V] key combination (hold CTRL down, press K, then press V), or select "Preview in Browser" under the View menu.

You will be presented with a preview dialog that allows for some advanced configuration when previewing this file. We'll stick with the default, so just click on OK.

A new tab should appear in a split view, showing birds.xml. It looks much the same, except for a comment at the top saying that the XML file does not have a stylesheet associated with it.

Now, in birds.xml, under the XML declaration, add the following:

<?xml-stylesheet href="birds.css" type="text/css"?>

Then save the document. You will notice that the content in the preview tab has changed, and is now mostly unreadable. We'll fix that now.

In the project tab, right click on the sample project icon to get the context menu. Under "Add", choose "New File...". Choose CSS in the Templates column, then give the file the name birds.css. You should now have a blank document. Add the following to birds.css.


    /* The selector names below are illustrative -- the published
       stylesheet lost its selectors; use the element names that
       actually appear in birds.xml. */
    birds {
        border:1px solid #000;
        font: 12px verdana;
    }

    bird {
        display: block;
        border-top:1px solid #ccc;
        border-bottom:1px solid #ccc;
    }

    name {
        display: block;
        border-top:1px solid #ccc;
        border-bottom:1px solid #ccc;
    }

Now save that, and in the preview tab, click the refresh button. You will notice that the content changes into something that actually looks kind of nice!

Now, birds.xml uses a lot of attributes rather than text data, so it's not the most useful file for this example; it's simply a file found in every Komodo installation. If you're using DocBook, you can find some CSS files for DocBook at:


This can be extended to any XML dialect, though I imagine it will be most useful for XML files targeted at some kind of documentation.
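If you apply this to many XML files, inserting the processing instruction by hand gets tedious. Here's a small Python helper (my own, not part of Komodo) that adds the xml-stylesheet PI right after the XML declaration, exactly as we did for birds.xml:

```python
def add_stylesheet_pi(xml_text, css_href):
    """Insert an xml-stylesheet PI after the XML declaration, if any."""
    pi = '<?xml-stylesheet href="%s" type="text/css"?>' % css_href
    if xml_text.lstrip().startswith("<?xml "):
        decl_end = xml_text.find("?>") + 2
        return xml_text[:decl_end] + "\n" + pi + xml_text[decl_end:]
    # No XML declaration: put the PI first.
    return pi + "\n" + xml_text

doc = '<?xml version="1.0"?>\n<birds>\n  <bird name="robin"/>\n</birds>'
print(add_stylesheet_pi(doc, "birds.css"))
```

After running it on birds.xml, the second line of the file is the stylesheet PI, and the preview picks up the CSS on the next refresh.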


Working with Rails 2.0 Scaffolds in Komodo

Product: Komodo | tags: rails rails 2.0 scaffold

Scaffolds in Rails 2.0 aren't the same as they were in 1.2, but Komodo isn't aware of that. Here's how to use Komodo's scaffold tool after the Rails upgrade.

I've been testing out the behavior of Komodo's Rails Project Template with Rails 2.0, and found that scaffolding breaks with this output:

 library --skip
      exists  app/models/
      exists  app/controllers/
      exists  app/helpers/
      exists  app/views/movies
      exists  app/views/layouts/
      exists  test/functional/
      exists  test/unit/
wrong number of arguments (1 for 2)

This happens even when I run the command on the command line, with the usual model and controller name arguments.

It turns out that not only has version 1.* scaffolding been deprecated, but the meaning of the command has changed in version 2.0. Instead of supplying the name of an existing model and a new controller, you supply the name of a new model and its fields, and it will generate a model, a migration, and a REST-based controller.

The workaround is simple, and requires no changes to Komodo. To build a REST-based resource for your Rails 2.0 project, click the "scaffold" tool, give the name of the model as a capitalized, CamelCase singular name ("Movie", "Grocery", "Blog"), and in the "library" field enter the model attributes instead ("name:string year:integer rating:float" for Movie, "name:string quantity:float price:integer" for Grocery, etc.).

A later version of the tool will check which version of Rails is being run, and adapt accordingly.
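The model-plus-attributes shape also applies when you run the generator by hand in a Rails 2.0 project. A small Python sketch (the helper name is mine) makes the shape of the equivalent command line explicit:

```python
def scaffold_command(model, fields):
    """Build the Rails 2.0 scaffold generator command line:
    model name plus attribute list, no controller argument."""
    return "ruby script/generate scaffold %s %s" % (model, " ".join(fields))

cmd = scaffold_command("Movie", ["name:string", "year:integer", "rating:float"])
print(cmd)
# ruby script/generate scaffold Movie name:string year:integer rating:float
```

Running that command from the root of a Rails 2.0 application produces the model, migration, and REST-based controller described above.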

Multiple instances of Komodo


Can I run more than one instance of Komodo?


In Komodo 5, you can use "Window->New Window" to create another Komodo window.
In Komodo 6 and higher, you can use "File->New->New Window".

It is also possible to run a completely separate Komodo instance on Mac and Linux, though not on Windows. This does not work on Windows because the Windows Komodo build uses a different locking mechanism, which is a kernel lock specific to the application name and version.

By default Komodo will not allow two instances to run at once, because the Komodo profile directory cannot support multiple instances. You can get around this limitation by using a separate profile directory for the second instance!

You can use a custom profile directory by setting the KOMODO_USERDATADIR environment variable. If you set a different value for each Komodo session you want to run, you'll be able to have multiple sessions running at once.

Example to run two different Komodo sessions on Linux/Mac:

# export KOMODO_USERDATADIR=/home/toddw/.komodoide_session1
# komodo
# export KOMODO_USERDATADIR=/home/toddw/.komodoide_session2
# komodo

Note: You do not need a separate KOMODO_USERDATADIR setting for the default (first) instance; it will use the Komodo default location if not supplied. The default location details are here:

Now, the catch is that no information is shared between the two sessions. Your toolbox changes, session state, preferences, code intelligence, etc. will only ever be seen by the one Komodo session that owns them, much as Firefox bookmarks, sessions, and preferences are separated between Firefox profiles.
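The two-session example above can also be scripted; here's a Python sketch (the profile paths are examples only) that builds a distinct environment per session:

```python
import os

def session_env(profile_dir):
    """Copy the current environment and point KOMODO_USERDATADIR
    at a per-session profile directory."""
    env = dict(os.environ)
    env["KOMODO_USERDATADIR"] = profile_dir
    return env

env1 = session_env(os.path.expanduser("~/.komodoide_session1"))
env2 = session_env(os.path.expanduser("~/.komodoide_session2"))

# Each call would start an independent instance (requires "komodo" on PATH):
# import subprocess
# subprocess.Popen(["komodo"], env=env1)
# subprocess.Popen(["komodo"], env=env2)
```

Because each Popen call gets its own env mapping, the two instances never see each other's profile directory.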


Performance problems with remote User profiles


Komodo performs badly on my laptop when I am at work on my company network, but works fine when I'm at home. How do I fix this?


This is possibly because your company's network uses roaming or remote profiles, so when you are at work your user profile directory (including Komodo's preferences) is stored on a remote server. The best way to work around this is to store your preferences in a directory on your machine's local drive, outside your user profile area.

1. Create a new environment variable called 'KOMODO_USERDATADIR' and set it to a local path such as 'C:\KomodoPrefs\':

- Right-click on My Computer and select Properties
- Click on the Advanced tab and hit the Environment Variables button towards the bottom
- Below the System Variables list, click the 'New' button
- For 'Variable name' fill in KOMODO_USERDATADIR; for 'Variable value' fill in your local path.

2. Start Komodo. This will create a new set of preferences in the local directory, and hopefully improve performance.