I try not to pollute my machine with libraries and apps from my experiments, so I cannot do without Homebrew and Docker. Here’s how I quickly set up an isolated environment for some quick and dirty “browser automation” I needed 😉
The quick start guide is good, but as there is no Selenium image for M1/arm yet, the container errors out. Adding the platform flag (and an optional container name) works wonders.
projects % docker run -d -p 4444:4444 -p 7900:7900 --shm-size="2g" selenium/standalone-firefox:4.1.1-20211217
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
projects % docker run -d -p 4444:4444 -p 7900:7900 --shm-size="2g" --platform linux/amd64 --name selfox selenium/standalone-firefox:4.1.1-20211217
ce4ee1ba7b31c59b7c9964abd1c219b87a4ab49098fb4436dd0a1ba797a6896b
I finally parted with my 7-year old MacBook Air for a new WFH setup consisting of
Macbook Pro M1
2 x 24″ wide monitors
USB-C hub
I added a 10-in-1 USB-C hub as my “dock” to expand my ports. Essentially I need a USB-A port for my wireless keyboard set (also 7 years old), RJ45 for occasional networking, and extra display ports for the dual extended monitors.
My original intention was to have just “one port” connected to my MBP, as I am used to the “docking” model, with all the other connectors served through the hub, including USB-C PD and the dual displays. However the hub does not support DisplayLink, so it can only mirror the output, and I still have to keep an extra HDMI cable connected directly to my MBP. Boo…
UnnaturalScrollWheels
The first obvious discomfort is that Apple decided it is natural for the mouse wheel to scroll in the same direction as the trackpad. Not that it’s wrong, but I usually use the trackpad with the Mac and the keyboard and mouse for Windows, so my scroll direction gets messed up. System Preferences lets me reverse the scroll, but the setting affects both the trackpad and the mouse wheel simultaneously.
Luckily I’m usually not the only one with such problems, and UnnaturalScrollWheels solves this gracefully.
brew install --cask unnaturalscrollwheels
Karabiner Elements
Next are the modifier keys. In Mac-world we have Control, Option and Command, whereas Windows has Ctrl, WinKey and Alt. Even then, Control works differently: Copy is Cmd+C on Mac vs Ctrl+C on Windows. Because I’ve been using both at home and at work, I’m able to “code-switch” between the two keyboards instead of mapping one onto the other.
By setting my Citrix Viewer preferences I was able to get close to the Windows keyboard mapping, but Alt still sits on the WinKey and the left WinKey is forced onto the right WinKey. System Preferences allows me to remap modifiers per input device, but I still could not use my left WinKey for commonly-used keystrokes like Win-E and Win-R with one hand.
With Karabiner-Elements I was able to remap the right WinKey to the left and push Alt back to where it was. Finally I can do Ctrl-Alt-Del in peace.
brew install --cask karabiner-elements
I also noticed that Karabiner can remap mouse clicks, including the middle click on the scroll-wheel. However it did not detect my wheel-scroll, so it was not able to replace UnnaturalScrollWheels.
Bonus Hint: Ctrl-Space for autocomplete may be by default mapped to Spotlight Search or Input Source change (I use multiple input languages) so they may need to be disabled/remapped in System Preferences > Keyboard > Shortcuts.
Bonus Problem: Alt-Tab in Citrix activates the Mac app switcher instead of the one inside remote Windows. I can still switch with Win-Tab, which is not as bad as the Right-WinKey issue. Karabiner has “complex modifications” that can import rules from the web which seem to support this, but I suspect I’ll still need to edit the rules to target a specific input device. Another adventure for another time.
DisplayPlacer
My new monitors were placed above my MBP, giving me a triple-screen setup (the MBP screen is too big to waste). I decided to have Citrix Viewer span across the two monitors as a dual-screen desktop, while my Mac activities remain on the MBP screen.
Several issues:
Stretching Citrix Viewer across two displays
In macOS Mission Control preferences there is “Displays have separate Spaces”. With that enabled, a window can only appear on one of the screens; disabling it (which requires logging out and back in) lets a single window stretch across displays. Sure, Citrix Viewer has a View option to “Use all Displays in Full Screen”, but that takes over all 3 of my displays instead of just 2.
Multiple extended displays with same model
Every time I come back to my workstation (after a break or the next morning), the monitors and MBP are in sleep mode (which is good). But when I log in again, macOS reconnects to the displays and often gets them mixed up, and I have to go back to System Preferences to swap the two monitors’ positions each time.
There seems to be no way to consistently force each one to be recognized as the correct display. I tried swapping the HDMI connections and turning them on in sequence, but it still turns out wrong most of the time.
After a few days I got really sick of it and decided to fix it. Luckily, with displayplacer I was able to restore the display layout with a single command. On top of that I was able to align my MBP screen to the true center, since my dragging in Display Preferences was not so accurate. (This is relevant to issue #3 below.)
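displayplacer itself can be brew-ed; if I recall correctly it is a plain formula rather than a cask.
brew install displayplacer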
Windows thrown back to the primary display
After waking the MBP and monitors, not only was the display layout gone, the windows that were on the extended displays got thrown back to the primary monitor as well. I had to re-position and resize the windows each time. So, what if I could script the window position and size, and run displayplacer, all behind one global shortcut key? Mac Automator to the rescue!
Automating displayplacer was straightforward: I used a “Run Shell Script” task and pasted in the command that displayplacer list prints out. The only caveat was to specify the full path, since I brew-ed it. I try to brew where available so I can manage versions, and uninstall without wondering whether I can just drag the application to the Bin or need an uninstaller.
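For illustration, the Run Shell Script content ends up looking something like the line below. The screen IDs and resolutions are placeholders; displayplacer list prints the exact line for your own layout, and on an Apple-silicon Homebrew install the binary typically lives under /opt/homebrew/bin.
/opt/homebrew/bin/displayplacer "id:<monitor-1-id> res:1920x1080 origin:(-960,-1080) degree:0" "id:<monitor-2-id> res:1920x1080 origin:(960,-1080) degree:0" "id:<mbp-screen-id> res:<mbp-res> origin:(0,0) degree:0"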
The window stuff was trickier. I felt AppleScript could do it, but the basic “tell application … to set bounds” didn’t work. Ultimately what worked for me was to go through System Events and set the position and size separately. I also discovered that “window 1” was the little floating menu at the top, so my intended target was “window 2”.
on run {input, parameters}
    tell application "System Events" to tell application process "Citrix Viewer"
        set position of window 2 to {-960, -1080}
        set size of window 2 to {3840, 1080}
    end tell
    return input
end run
Still, that wasn’t enough. When I tried to set a global shortcut on it, it required granting permission to whatever app happened to be in the foreground. It is neither sensible nor practical to grant every app this access, so an extra workaround was needed: extract the script into its own app and have the shortcut merely activate it.
do shell script "osascript -e 'tell application \"SetCitrixViewerBounds\" to activate'"
If, for whatever reason, you want to use Chrome to inspect a site, but the site tries to be smart and disables its functionality when it detects that DevTools is open, you can use remote debugging to bypass it.
This is achieved by starting Chrome with a debugging port and connecting to it from another Chrome instance.
Step 1: Launch your 1st Chrome instance. This will be your debugger. (This step is needed; otherwise, attempting to launch the 2nd Chrome will just collapse it into the first instance.)
Step 2: Launch the 2nd Chrome with a debugging port (below for macOS).
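Something along these lines works on macOS; the port number and the throwaway profile directory are arbitrary choices of mine, and the separate --user-data-dir is what makes Chrome start a genuinely new instance instead of collapsing into the first one.
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug
Back in the 1st Chrome you can then connect via chrome://inspect (add localhost:9222 under the network targets) and inspect the pages of the 2nd instance without ever opening DevTools in it.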
Initial problem: simple. Given a folder of .JSON files, extract some attributes and write them out to another file. Instead of relying on my trusty Groovy, I took the opportunity to implement it in NodeJS.
The first attempt was straightforward. Read the folder; for each file, parse the JSON, open a new file and write it out.
var fs = require('fs');
var path = require('path');
var util = require('util');

var folder = '/temp/json/';
for (var file of fs.readdirSync(folder)) {
    var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
    var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
    for (var item of json.item) {
        out.write(util.format('%s,%s\n', item.id, item.title));
    }
    out.end();
}
Note: Exception handling, file type checking, etc. have been removed for conciseness, to keep the focus on the relevant aspects.
I tested this on a folder with 1 file first. Good, the output is correct. Tested on 10 files. Same correct output. Now for the first batch of 1000.
It took some time to run, but only 0-byte output files were created, and the rate of new file creation slowed down over time. More tests with fewer files showed that the output was only written after the program ended. Aha! Buffered writes.
That’s still fine, since I get the correct results at the end of the batch. But I get this error before I reach the end, which discards all my buffered writes…
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Not ready to give up (nor just repeat runs with smaller batches), I turned to Google.
First suggestion: the event-driven model… Awkward for this case, but I refactored the script to schedule each file with process.nextTick().
var folder = '/temp/json/';
for (var file of fs.readdirSync(folder)) {
    process.nextTick(function (file) {
        var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
        var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
        for (var item of json.item) {
            out.write(util.format('%s,%s\n', item.id, item.title));
        }
        out.end();
    }.bind(null, file));
}
Nope, didn’t help. Is it because all calls were scheduled on the same “next tick”?
Let’s push each file to the subsequent tick.
var folder = '/temp/json/';
var files = fs.readdirSync(folder);
function json2csv(index) {
    if (index >= files.length) return;
    var file = files[index];
    var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
    var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
    for (var item of json.item) {
        out.write(util.format('%s,%s\n', item.id, item.title));
    }
    out.end();
    process.nextTick(json2csv.bind(null, index + 1));
}
process.nextTick(json2csv.bind(null, 0));
Still no. Time to try the 2nd suggestion: respect the return value of out.write(), which indeed returned false after some writes.
var folder = '/temp/json/';
function json2csv(files, start) {
    for (var i = start; i < files.length; i++) {
        var file = files[i];
        var json = JSON.parse(fs.readFileSync(path.join(folder, file)));
        var out = fs.createWriteStream(path.join(folder, file.slice(0, -5) + '.csv'));
        var ok = true;
        for (var item of json.item) {
            // keep pushing writes within the same file, but remember if the buffer filled up
            ok = out.write(util.format('%s,%s\n', item.id, item.title));
        }
        out.end();
        if (!ok) {
            // back-pressure observed: resume with the remaining files only after this stream has flushed
            out.once('finish', json2csv.bind(null, files, i + 1));
            return;
        }
    }
}
json2csv(fs.readdirSync(folder), 0);
And... it works! So much for starting with a 10-line script.
It may not be the best tool for the job (subjective), but sometimes it's more efficient to work with a tool you already know; a NodeJS developer without Groovy knowledge would likewise find this easier to write in Node than in Groovy/Bash/Perl/Python.
Disclaimer: I decided to continue pushing writes within a file even when out.write() returns false, to simplify the implementation, because I know each input file is only around 1MB, which is safe to buffer. If the input size is unknown, writes within the same file may also need to be deferred until the stream drains (perhaps by turning the items into a readable stream and piping it).
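For completeness, a minimal sketch of what that per-item deferral could look like (my own illustration, not what I actually ran; writeItems and done are hypothetical names): write items until write() returns false, then continue only on the 'drain' event.
function writeItems(out, items, start, done) {
    for (var i = start; i < items.length; i++) {
        var ok = out.write(util.format('%s,%s\n', items[i].id, items[i].title));
        if (!ok) {
            // buffer is full: wait for the stream to drain before writing the rest
            out.once('drain', writeItems.bind(null, out, items, i + 1, done));
            return;
        }
    }
    out.end();
    done();
}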
I own the administrator account of a Bitnami Redmine that I installed, but I usually work with a regular user account (the Unix rule of not using root). Unfortunately I made the unforgivable mistake of creating a regular issue using the Admin account. For correctness’ sake, I searched for a way to modify the creator… (talk about non-repudiation…)
Nope, there is no built-in way; it requires a plugin. I don’t intend to do this regularly, so I don’t really need a plugin. I decided to mess with the database directly and see if the schema was easy to understand. It turned out to be very straightforward.
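For the record, the change boils down to something like the statements below, assuming the default schema where issues.author_id points at users.id. The ids are placeholders, so look them up first.
-- find the two accounts and the issue in question
SELECT id, login FROM users WHERE login IN ('admin', 'myuser');
SELECT id, subject, author_id FROM issues WHERE id = <issue_id>;
-- reassign the issue to the regular account
UPDATE issues SET author_id = <regular_user_id> WHERE id = <issue_id>;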
Imagine a car park that charges per hour or part thereof, with a different cost for each hour. Assume also that there is no pattern, hence a mapping table of hour -> cost:
hr    cost
0     0.30
1     0.60
2     0.80
3     1.20
4     1.30
5+    1.60
Parking beyond 5 hours will max your charges at $1.60.
In Excel there is the VLOOKUP function, with Range_lookup=TRUE to find the nearest match.
In R we can do a rolling join on a data.table. Without the roll, it works like Range_lookup=FALSE: it finds only an exact match.
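A rough sketch with the data.table package, using a toy table mirroring the one above (my own example values):
library(data.table)
charges <- data.table(hr = c(0, 1, 2, 3, 4, 5),
                      cost = c(0.30, 0.60, 0.80, 1.20, 1.30, 1.60),
                      key = "hr")
charges[J(2.5)]               # exact match only: no row for 2.5, so cost is NA
charges[J(2.5), roll = TRUE]  # rolls back to hr = 2, cost 0.80
charges[J(7), roll = TRUE]    # beyond 5 hours, rolls back to the 1.60 cap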
As usual there are many guides out there on installing something on some OS, but with Linux I have never found a guide that takes me straight through (every environment and every version requires a different setup). So here are my very own steps for installing PostgreSQL 9.4 on CentOS 6.6 (also for my future self-reference).
Prerequisites: Ensure DNS and HTTP(S) are working for yum, otherwise you may encounter “Host not found” errors, etc. (This is out of scope as it may be down to nameservers or firewall settings.)
sudo -u postgres psql postgres
CREATE DATABASE devdb;
CREATE USER devuser WITH PASSWORD 'devpass';
GRANT ALL ON DATABASE devdb TO devuser;
4. Allow remote connections
Ref: http://www.thegeekstuff.com/2014/02/enable-remote-postgresql-connection/
The pg_hba.conf entry below allows any IP (0.0.0.0/0) to connect and authenticate using md5. You can also restrict this to your webserver’s IP only.
The postgresql.conf change makes the server listen on all attached IPs.
sudo vi /var/lib/pgsql/9.4/data/pg_hba.conf
host all all 0.0.0.0/0 md5
sudo vi /var/lib/pgsql/9.4/data/postgresql.conf
listen_addresses = '*'
sudo service postgresql-9.4 restart
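A quick way to verify from another machine (assuming the psql client is installed there and the firewall allows port 5432):
psql -h <server-ip> -U devuser -d devdb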
5. Move the data to another disk
Ref: http://stackoverflow.com/questions/28414558/moving-postgresql-main-folder-out-of-var-lib-postgresql-9-4
My main disk was the default 10GB, enough for the OS and programs but not for the database data. I have a spanking new 300GB disk attached, and I want to move the data directory onto the new disk.
There were several methods involving reconfiguring the data directory location, but I found it easier to just symlink it.
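Roughly along these lines, assuming the new disk is mounted at /data (the paths are placeholders of mine; adjust to your mount point):
sudo service postgresql-9.4 stop
sudo mv /var/lib/pgsql/9.4/data /data/pgsql_data
sudo ln -s /data/pgsql_data /var/lib/pgsql/9.4/data
sudo chown -R postgres:postgres /data/pgsql_data
sudo service postgresql-9.4 start
If SELinux is enforcing, the new location may also need its security context fixed.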
I’m tired of building admin UIs, so I’m trying out ng-admin. It’s pretty straightforward to set up given the guides and demos.
List, create and update were fine until I got to the DELETE method. The server was throwing 400 Bad Request, and upon inspecting the network in Chrome I discovered that ng-admin was sending a JSON body with the request. I don’t really care who is “following the standard” as long as the two work together (think browsers and jQuery), so I’m fine with fixing either side: have the client not send the body, or have the server accept the non-empty body.
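On the client side, one option is a Restangular request interceptor that drops the body for delete operations. A sketch below, where the myApp module name is just a placeholder for your own ng-admin app:
var myApp = angular.module('myApp', ['ng-admin']);
myApp.config(['RestangularProvider', function (RestangularProvider) {
    RestangularProvider.setRequestInterceptor(function (element, operation) {
        // send no body on DELETE; leave every other operation untouched
        return operation === 'remove' ? null : element;
    });
}]);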
List, create, updates were fine until I got to the DELETE method. The server was throwing 400 Bad Requests and upon Chrome network inspection I discover that ng-admin was sending a JSON body in the request. I don’t really care who is “following the standard” as long they work together (think browsers and jquery), so I’m fine to fix either side to either the client not send the body, or the server accepting the non-empty body.