NodeJS: flush fs.createWriteStream()

Initial problem: simple. Given a folder of .json files, extract attributes and write them to another file. Instead of relying on my trusty Groovy, I took this opportunity to implement it in NodeJS.

First attempt was straightforward. Read the folder, for each file, parse JSON, open new file and write it out.

var fs = require('fs');
var path = require('path');
var util = require('util');

var folder = '/temp/json/';
for (var file of fs.readdirSync(folder)) {
  var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
  var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
  for (var item of json.item) {
    out.write(util.format('%s,%s\n', item.id, item.title));
  }
  out.end();
}

Note: Exception handling, file type checking, etc. were removed to keep the examples concise and focused on the relevant parts.

Tested this on a folder with 1 file first. Good, output is correct. Tested on 10 files. Same correct output. Now for the first batch of 1000.

It took some time to run, but only 0-byte output files were created. The rate of new file creation also slowed down over time. More tests with fewer files showed that the output was only written after the program ended. Aha! Buffered writes.

That’s still fine, since I get the correct results at the end of the batch. But I get this error before I reach the end, which discards all my buffered writes…

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory

Not ready to give up (nor just repeat runs with smaller batches), I turned to Google.

This guy had the same problem: nothing gets written before the program ends.
http://grokbase.com/t/gg/nodejs/125e84345w/how-to-flush-a-writestream-before-the-program-is-done-executing

Event-Driven Model… Awkward for this case, but I refactored the script to trigger process.nextTick().

var folder = '/temp/json/';
for (var file of fs.readdirSync(folder)) {
  process.nextTick(function(file) {
    var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
    var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
    for (var item of json.item) {
      out.write(util.format('%s,%s\n', item.id, item.title));
    }
    out.end();
  }.bind(null, file));
}

Nope, didn’t help. Is it because all calls were scheduled on the same “next tick”?
Let’s push each file to the subsequent tick.

var folder = '/temp/json/';
var files = fs.readdirSync(folder);

function json2csv(index) {
  if (index >= files.length) return;
  var file = files[index];

  var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
  var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
  for (var item of json.item) {
    out.write(util.format('%s,%s\n', item.id, item.title));
  }
  out.end();
  
  process.nextTick(json2csv.bind(null, index+1));
}

process.nextTick(json2csv.bind(null, 0));

Still no. Time to try the second suggestion: check the return value of out.write(), which did start returning false after some writes.

var folder = '/temp/json/';

function json2csv(files, start) {
  for (var i=start; i<files.length; i++) {
    var file = files[i];
    var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
    var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
    var written = true;
    for (var item of json.item) {
      written = out.write(util.format('%s,%s\n', item.id, item.title)) && written;
    }
    if (written) {
      out.end();
    } else {
      out.once('drain', function() {
        out.end();
        json2csv(files, i+1);
      });
      return;
    }
  }
}

json2csv(fs.readdirSync(folder), 0);

And… it works! So much for starting with a 10-line script.

It may not be the best tool for the job (subjective), but sometimes it’s more efficient to work with a tool you already know; a NodeJS developer without Groovy knowledge would likewise find this easier to write in Node than in Groovy/Bash/Perl/Python.

Disclaimer: I decided to keep pushing writes even when out.write() returns false to simplify the implementation, because I know each input file was only around 1MB, which is safe to buffer. If the input size is unknown, writes within the same file may need to be deferred until drained (maybe by transforming the items into an input stream).
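
For reference, here is a minimal sketch of that streaming alternative, assuming the same id/title attributes (writeCsv is a made-up helper, not part of the script above). Wrapping each file's items in a Readable stream and piping it into the write stream lets pipe() handle backpressure instead of a manual drain handler.

var fs = require('fs');
var path = require('path');
var util = require('util');
var stream = require('stream');

// Hypothetical helper: stream one file's items into its CSV,
// letting pipe() deal with backpressure, then call done().
function writeCsv(folder, file, done) {
  var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
  var items = json.item.slice();   // copy, so we can shift() items off it
  var src = new stream.Readable();
  src._read = function() {
    var item = items.shift();
    // push the next CSV line, or null to signal the end of this file
    this.push(item ? util.format('%s,%s\n', item.id, item.title) : null);
  };
  var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
  out.on('finish', done);          // continue only after everything is flushed
  src.pipe(out);
}

Files could then be processed one at a time by chaining the done callbacks.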


HACK: Change issue creator in Bitnami Redmine

I own the administrator account of a Bitnami Redmine instance that I installed, but I usually work from a regular user account (the Unix rule of not using root). Unfortunately I made the unforgivable mistake of creating a regular issue with the admin account. For “correctness” sake I searched for a way to modify the issue’s creator… (talk about non-repudiation…)

Nope, there is no built-in way to do it; it requires a plugin. I don’t intend to do this regularly, so I don’t really need a plugin. I decided to mess with the database directly and see if the schema was easy to understand. Turns out it was very straightforward.

Notes:

  1. The mysql root password is the same password as the Redmine admin.
  2. I am using a Bitnami Redmine 3.1.0-0 instance; you may need to use “SHOW DATABASES;” to figure out which database to use.
  3. In the process I used “SHOW TABLES;” and “DESC issues;” to probe the schema. I am just showing the final necessary commands to run.
  4. You can get the issue ID by looking at the URL when the issue is displayed in your browser.
  5. You can hover over the desired user in the browser to peek at the user’s ID to be used as the author_id.
> ./mysql -u root -p
Enter password: 
mysql> USE bitnami_redmine;
Database changed

mysql> UPDATE issues SET author_id=3 WHERE id=59;
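
To double-check the row before (and after) the update, a quick query helps (59 being the example issue ID from above):

mysql> SELECT id, author_id FROM issues WHERE id=59;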

Refresh your browser.


R: Find rows that contain a vector

Imagine stations on several train lines. Given a station pair, find the lines that allow travel between these stations (no transfers!)


> # install.packages("qpcR")
> library(qpcR)
> stations = qpcR:::cbind.na(EWL=c("Pasir Ris", "Tampines", "Simei", "Tanah Merah", "Bedok", "Kembangan", "Eunos", "Paya Lebar", "Aljunied", "Kallang", "Lavender", "Bugis", "City Hall", "Raffles Place", "Tanjong Pagar", "Outram Park", "Tiong Bahru", "Redhill", "Queenstown", "Commonwealth", "Buona Vista", "Dover", "Clementi", "Jurong East", "Chinese Garden", "Lakeside", "Boon Lay", "Pioneer", "Joo Koon"),
+                 NSL=c("Jurong East", "Bukit Batok", "Bukit Gombak", "Choa Chu Kang", "Yew Tee", "Kranji", "Marsiling", "Woodlands", "Admiralty", "Sembawang", "Canberra", "Yishun", "Khatib", "Yio Chu Kang", "Ang Mo Kio", "Bishan", "Braddell", "Toa Payoh", "Novena", "Newton", "Orchard", "Somerset", "Dhoby Ghaut", "City Hall", "Raffles Place", "Marina Bay", "Marina South Pier"), 
+                 NEL=c("HarbourFront", "Outram Park", "Chinatown", "Clarke Quay", "Dhoby Ghaut", "Little India", "Farrer Park", "Boon Keng", "Potong Pasir", "Woodleigh", "Serangoon", "Kovan", "Hougang", "Buangkok", "Sengkang", "Punggol"),
+                 CCL=c("Dhoby Ghaut", "Bras Basah", "Esplanade", "Promenade", "Nicoll Highway", "Stadium", "Mountbatten", "Dakota", "Paya Lebar", "MacPherson", "Tai Seng", "Bartley", "Serangoon", "Lorong Chuan", "Bishan", "Marymount", "Caldecott", "Bukit Brown", "Botanic Gardens", "Farrer Road", "Holland Village", "Buona Vista", "one-north", "Kent Ridge", "Haw Par Villa", "Pasir Panjang", "Labrador Park", "Telok Blangah", "HarbourFront"),
+                 DTL=c("Bukit Panjang", "Cashew", "Hillview", "Beauty World", "King Albert Park", "Sixth Avenue", "Tan Kah Kee", "Botanic Gardens", "Stevens", "Newton", "Little India", "Rochor", "Bugis", "Promenade", "Bayfront", "Downtown", "Telok Ayer", "Chinatown"))

> apply(stations, 2, function(route) { all(c("Dhoby Ghaut", "Bishan") %in% route) })
  EWL   NSL   NEL   CCL   DTL 
FALSE  TRUE FALSE  TRUE FALSE 
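
To get just the line names instead of a logical vector, the same check can be wrapped in a small helper (a sketch; lines_between is a made-up name):

> lines_between <- function(a, b) {
+   names(which(apply(stations, 2, function(route) all(c(a, b) %in% route))))
+ }
> lines_between("Dhoby Ghaut", "Bishan")
[1] "NSL" "CCL"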


Excel VLOOKUP in R via Rolling Join

Imagine a car park that charges a different cost for each hour of parking or part thereof. Assume also that there is no pattern, hence a mapping table of hour -> cost:

hr cost
0 0.30
1 0.60
2 0.80
3 1.20
4 1.30
5+ 1.60

Parking beyond 5 hours will max your charges at $1.60.

In Excel there is the VLOOKUP function, with Range_lookup=TRUE to find an approximate match (the largest value less than or equal to the lookup value).

In R we can do a rolling join on a data.table. Without the roll, it works like Range_lookup=FALSE; it finds only exact matches.

> # install.packages("data.table")
> library(data.table)
> fees <- data.table(hr=c(0, 1, 2, 3, 4, 5), 
                   cost=c(0.3, 0.6, 0.8, 1.2, 1.3, 1.6))
> fees
   hr cost
1:  0  0.3
2:  1  0.6
3:  2  0.8
4:  3  1.2
5:  4  1.3
6:  5  1.6
> query <- data.table(parked=c(0.4, 1.5, 2, 2.14, 4.5, 10))
> setkey(fees, hr)
> fees[query]
      hr cost
1:  0.40   NA
2:  1.50   NA
3:  2.00  0.8
4:  2.14   NA
5:  4.50   NA
6: 10.00   NA
> fees[query, roll=TRUE]
      hr cost
1:  0.40  0.3
2:  1.50  0.6
3:  2.00  0.8
4:  2.14  0.8
5:  4.50  1.3
6: 10.00  1.6
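
The rolled cost can also be pulled out directly for a single duration, e.g. 2.14 hours rolling down to the 2-hour rate (same fees table as above):

> fees[data.table(parked=2.14), roll=TRUE]$cost
[1] 0.8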


PostgreSQL 9.4 on CentOS 6.6

As usual there are many guides out there on installing something on some OS, but with Linux I never found a guide that could bring me straight through (every environment and every version requires a different setup). So here are my very own steps for installing PostgreSQL 9.4 on CentOS 6.6 (also for my future self-reference).

Prerequisites: Ensure DNS and HTTP(S) are working for yum, otherwise you may encounter “Host not found” errors, etc. (This is out of scope, as it may involve nameserver or firewall settings.)

1. Configure yum repo
Ref: http://tecadmin.net/install-postgresql-on-centos-rhel-and-fedora/

sudo rpm -Uvh http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-redhat94-9.4-1.noarch.rpm
sudo yum install postgresql94-server postgresql94 postgresql94-contrib

2. Initialize the database
Ref: https://wiki.postgresql.org/wiki/YUM_Installation

sudo service postgresql-9.4 initdb
sudo service postgresql-9.4 start

3. Connect and create the database
Ref: http://serverfault.com/questions/110154/whats-the-default-superuser-username-password-for-postgres-after-a-new-install
After default installation, only the “postgres” user can access the database, but it has no password.
Create the database and grant a user access; you will use this user to manage the database subsequently (don’t use the “postgres” user).

sudo -u postgres psql postgres
    CREATE DATABASE devdb;
    CREATE USER devuser WITH PASSWORD 'devpass';
    GRANT ALL ON DATABASE devdb TO devuser;

4. Allow remote connections
Ref: http://www.thegeekstuff.com/2014/02/enable-remote-postgresql-connection/
The pg_hba.conf entry below allows any IP (0.0.0.0/0) to connect and authenticate using md5; you can also restrict this to your webserver’s IP only.
The postgresql.conf change makes the server listen on all attached IPs.

sudo vi /var/lib/pgsql/9.4/data/pg_hba.conf
	host    all     all     0.0.0.0/0       md5

sudo vi /var/lib/pgsql/9.4/data/postgresql.conf
	listen_addresses = '*'

sudo service postgresql-9.4 restart
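
At this point a remote connection with the new user should work; something like the following from another machine (substitute your server's address):

psql -h <your-server-ip> -U devuser -d devdb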

5. Move the data to another disk
Ref: http://stackoverflow.com/questions/28414558/moving-postgresql-main-folder-out-of-var-lib-postgresql-9-4
My main disk was a default 10GB, enough for the OS and programs but not for the database data. I have a spanking new 300GB disk attached, and I want to move the data directory to the new disk.
There are several methods that involve changing the configured data directory, but I found it easier to just symlink it.

sudo service postgresql-9.4 stop
sudo mv /var/lib/pgsql/9.4/data /media/xvdb1/pgsql/9.4/
sudo ln -s /media/xvdb1/pgsql/9.4/data/ /var/lib/pgsql/9.4/data
sudo chown postgres:postgres /var/lib/pgsql/9.4/data
sudo service postgresql-9.4 start

6. Autostart
Finally, configure PostgreSQL to start itself on boot.

sudo chkconfig postgresql-9.4 on

Ready-to-use PostgreSQL.
