Monday, December 29, 2014

Chrome Remote Operation--work in progress

Now that I'm making a debugger abstraction, it's time to think about eating my own dogfood by connecting my IDE to a Chrome debugger instance running an application--or even another instance of the debugger.

Even better, it looks like the Chrome remote debugging API will not only let me debug a remote instance of Chrome, it will let me control and monitor everything that happens.

The Node module, chrome-remote-interface, is here.

Running more than one debugger against a single instance requires crmux, here.

And a partially implemented console, whose source I might be able to read to understand what to do, is here.

So far I've been able to do this:

1. Run Chrome with a remote debugging port:

google-chrome --remote-debugging-port=9222&
9222 is the standard debugging port.

2. Run crmux in a separate window

crmux

This multiplexes the debugger on port 9222, making it available on port 9223.

3. Now I can use the chrome-remote-interface REPL

./node_modules/chrome-remote-interface/bin/repl.js -p 9223
Proof that it works: if crmux is not running, it gives an error.

4. Now I can open a page at

     http://localhost:9223

This gives me a list of debuggable pages. If I choose the one that I want I get this URL:

http://localhost:9223/devtools/devtools.html?ws=localhost:9222/devtools/page/7F642E19-7F9A-40AB-BE87-B344B6307246

Note the ws=localhost:9222. That is the original port, and it can interfere with things, so I must manually change it to ....?ws=localhost:9223...

Proof that this works: I can kill crmux and things no longer work.

5. If I now run

crconsole -p 9223

I get a prompt. (It fails if crmux is not running)

And if I type the command

.tabs

I get something like this:

localhost> .tabs
[0] http://localhost:9223/devtools/devtools.html?ws=localhost:9223/devtools/page/7F642E19-7F9A-40AB-BE87-B344B6307246
[1] http://localhost:3333/IDE
[2] chrome-extension://jnihajbhpnppcggbcgedagnkighmdlei/_generated_background_page.html
[3] chrome-extension://ighdmehidhipcmcojjgiloacoafjmpfk/_generated_background_page.html
[4] chrome-extension://diebikgmpmeppiilkaijjbdgciafajmg/background.html
localhost> 

OK. Those are the right tabs.

But no matter what order I use, I haven't been able to get both the Chrome debugger and crconsole to play nicely together.

But: I was able to run an instance of crdebug and an instance of the chrome-remote-interface REPL together, and evaluate the following in the REPL:

chrome> Runtime.evaluate({expression: 'foo = 20'}, console.log)
Output is:
chrome> false { result: { type: 'number', value: 20, description: '20' },  wasThrown: false }
And the result, foo = 20, appears in the other console. So I have two different consoles talking to the same Chrome instance.
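
The same evaluation can be scripted rather than typed into the REPL. Here's a minimal sketch using chrome-remote-interface directly, assuming crmux is still multiplexing on port 9223:

```javascript
// Minimal sketch: connect through crmux on 9223 and evaluate an expression.
// Assumes Chrome was started with --remote-debugging-port=9222 and crmux is running.
var Chrome = require('chrome-remote-interface');

Chrome({ port: 9223 }, function (chrome) {
  chrome.Runtime.evaluate({ expression: 'foo = 20' }, function (err, result) {
    console.log(err, result);   // same output as the REPL example above
    chrome.close();
  });
}).on('error', function (err) {
  console.error('Cannot connect to the debugger:', err);
});
```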

The problem comes when I run a Chrome debugger instance along with a crdebug instance. Depending on which one I start first, the other fails.

If I run devtools first, then crconsole fails:
TypeError: Parameter 'url' must be a string, not undefined
    at Url.parse (url.js:107:11)
    at Object.urlParse [as parse] (url.js:101:5)
    at WebSocket.initAsClient (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/node_modules/chrome-remote-interface/node_modules/ws/lib/WebSocket.js:475:23)
    at new WebSocket (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/node_modules/chrome-remote-interface/node_modules/ws/lib/WebSocket.js:59:18)
    at Chrome.connectToWebSocket (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/node_modules/chrome-remote-interface/lib/chrome.js:112:15)
    at Object.ChromeREPL.setTab (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/index.js:223:17)
    at Object.<anonymous> (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/index.js:23:12)
    at /home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/index.js:167:7
    at IncomingMessage.<anonymous> (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crconsole/node_modules/chrome-remote-interface/lib/chrome.js:104:13)
    at IncomingMessage.EventEmitter.emit (events.js:117:20)

Then crmux fails this way:
/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/crmux.js:103
          msgObj.id = idMap.id;
                            ^
TypeError: Cannot read property 'id' of undefined
    at WebSocket.<anonymous> (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/crmux.js:103:29)
    at WebSocket.EventEmitter.emit (events.js:98:17)
    at Receiver.self._receiver.ontext (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/node_modules/ws/lib/WebSocket.js:682:10)
    at Receiver.opcodes.1.finish (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/node_modules/ws/lib/Receiver.js:391:14)
    at Receiver.expectHandler (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/node_modules/ws/lib/Receiver.js:378:31)
    at Receiver.add (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/node_modules/ws/lib/Receiver.js:87:24)
    at Socket.firstHandler (/home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/node_modules/ws/lib/WebSocket.js:663:22)
    at Socket.EventEmitter.emit (events.js:95:17)
    at Socket.<anonymous> (_stream_readable.js:746:14)
    at Socket.EventEmitter.emit (events.js:92:17)
The crmux code can be debugged with:

node-debug --preload 0 /home/awesome/tools/node-v0.10.26-linux-x86/lib/node_modules/crmux/crmux.js

The problem occurs when crmux can't map an upstream ID to a downstream one. Dealing with that properly will take a while. It probably has to do with the DevTools debugger making sure that the IDs it receives are correct, and refusing to handle an ID it does not understand.
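
A defensive guard in crmux would at least keep it alive while I investigate. This is a hypothetical patch; the names come from the stack trace above, not from a full reading of the source:

```javascript
// Hypothetical guard around crmux.js line 103: skip messages whose
// upstream id has no downstream mapping instead of crashing.
if (!idMap) {
  console.warn('crmux: dropping message with unmapped id', msgObj.id);
  return;
}
msgObj.id = idMap.id;   // the line that currently throws
```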

I can probably solve this by opening a debugger on the debugger and debugging it.


Tuesday, July 8, 2014

Interface to Coffeescript debugger

```node-inspector``` is the way to start a debugger, along with a script to compile coffee to js. There are several problems with the way things work.

The ideal: I've got a testServer that automatically runs tests when either the code or the test is changed. I'd like the following to happen:
1. If I add a statement, like ```debugger```,  to the code then instead of running the program, it runs it in the debugger.
2. When the debugger runs, instead of going to the first line of code it runs to the statement.

Right now I can't do that. Node inspector either stops on the first line of code or it runs until it gets a USER KILL signal, and then stops on a ```debugger``` statement, or must be stopped manually.
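
The first item on that wish list can at least be decided mechanically today: the testServer could scan a changed file for a ```debugger``` statement and pick the launch command accordingly. A sketch, with hypothetical file names and commands:

```javascript
// Sketch: choose how to launch based on whether the source contains a
// debugger statement. File name and commands are illustrative only.
var fs = require('fs');

function hasDebuggerStatement(file) {
  return /(^|\s)debugger\b/.test(fs.readFileSync(file, 'utf8'));
}

var file = 'src/app.coffee';
var cmd = hasDebuggerStatement(file)
  ? 'node-debug ' + file   // run it in the debugger
  : 'coffee ' + file;      // just run it
console.log('testServer would run:', cmd);
```

The second item--running to the statement instead of stopping on the first line--is the part node-inspector doesn't give me.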

If I use a ```debugger``` statement then every time the statement is hit, the debugger stops.

I could solve this by having my own debugger, and ultimately that might be the way that I want to go.

This issue https://github.com/node-inspector/node-inspector/issues/240 describes how ```node-debug``` might be modified to start the process running as soon as it comes up. It also points to the critical code for debugging.


Sunday, July 6, 2014

CoffeeScript Source maps

CoffeeScript has its virtues. It's terse and expressive. And using coffeescript/register you can tell Node to load your CoffeeScript files instead of looking for JavaScript.

But there are problems. Source maps make debugging easy, but to use source maps you must first compile the code to JavaScript. This clutters up the file system.

There are other problems as well. If you configure mocha to use CoffeeScript tests, then mocha will load the CoffeeScript without debug information, which means you're back to using JavaScript, and your stack traces will give JavaScript lines instead of CoffeeScript lines.

There are solutions, but they're going to be a bit of work. The CoffeeScript compiler has an option, "inline", that generates source maps along with the compiled JavaScript. The source maps can be appended to the JavaScript, and everything should work right.
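
A minimal sketch of the approach, assuming CoffeeScript 1.6 or later, where compile can return a v3 source map:

```javascript
// Sketch: compile CoffeeScript with a source map and append the map
// inline as a base64 data URI, so no .js.map file clutters the file system.
var fs = require('fs');
var coffee = require('coffee-script');

var source = fs.readFileSync('app.coffee', 'utf8');
var compiled = coffee.compile(source, { sourceMap: true, filename: 'app.coffee' });

var b64 = new Buffer(compiled.v3SourceMap).toString('base64');
var js = compiled.js +
  '\n//# sourceMappingURL=data:application/json;base64,' + b64 + '\n';

fs.writeFileSync('app.js', js);
```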

The inline-source-map project (https://github.com/thlorenz/inline-source-map) shows how this can be done.

The coffee-inline-source-map project (https://github.com/thlorenz/inline-source-map) compiles CoffeeScript with inline maps.

Source maps are created by the code at http://coffeescript.org/documentation/docs/sourcemap.html

Stack traces for coffee files are created by Error.prepareStackTrace, found at http://coffeescript.org/documentation/docs/coffee-script.html




Friday, June 20, 2014

Javascript diagramming libraries

Here's a simple UML diagram renderer: https://github.com/skanaar/nomnoml

JointJS is a JavaScript diagramming tool that scales from simple diagrams to diagramming user interfaces: https://github.com/DavidDurman/joint  And the demo site, with tutorials and documentation, is here: http://jointjs.com/

Wow! It's nicely designed. Elements are SVG, which makes them render beautifully--and they are based on CSS as well, so all the flexibility that's built into a browser is automagically built into your diagrams.

Also check out this color picker: http://www.daviddurman.com/flexi-color-picker/


Monday, June 16, 2014

Ideal Scene, Part I

What I want to produce is part of the World's Most Awesome Development environment.

What would such an environment entail? I wrote a bunch of it up a while ago, and I could probably find it if I searched, but I'm not going to. Instead I'm going to create it newly. And I'm going to try to create it in order.

First: I'd want the project to always be in a known state. That means there are tests, the tests are run regularly, and there's a visible report of the test results.

Just passing tests does not tell me where the project is. I need to know how much of the code is being tested. But I don't think that I should have 100% test coverage. Some tests are not worth writing. So there needs to be a way of identifying what parts of an application need to be tested and what parts do not, and then to measure coverage against that.

There also needs to be a way to measure the size of the application. SLOC, number of tests, and so on.

Finally, the presentation of this data has to be very straightforward. It has to tell me something that I want to know.

Second: I'd want to know how the project is changing. To that end, I want not just position data, but velocity as well. Are requirements being added now at the same rate as earlier? Are they being added more quickly?

Third: I want my development cycles to be exceptionally short and blindingly fast. Ideally I'd like each line of code to be its own development cycle.

How I get there is another problem.

Thursday, June 12, 2014

Inside out development

Building a new server component, I realized I was going about it the wrong way.

Long ago I learned that only a small percentage of a program's code actually does work. The rest of it is configuration, validation, housekeeping, error recovery and the like.

And that's how I started developing my component: reading configuration files, setting up housekeeping. It was a long, discouraging slog because there always are bugs. Always are, and always will be.

Hours into the process I realized that I needed to start with the core: the small number of lines that actually do work.

The component's job is to return some data. I wrote a spec and a test for that code.
The spec is written in mocha. Here's the outline.

It uses a utility function, setupServers, to set up the server, and messagePromise to send and test messages using Promises:

  before ->  setupServers(<args>)
  it "provides the state of the networks and services", ->
    messagePromise
      send: type: <data>
      expect: type: <data>
Then I wrote the code to return some dummy data. It took a couple of lines. A few tweaks, it ran, and I checked it in.
router.on <message>, (message) =>
  router.sendInfo <response>, <data>
That's not the end of the story, though. Working on the test in a test file and the code in the server file is inefficient. We want to do it a different way.

Gulp Plumber

Maybe a way to handle Gulp errors more easily:

http://cameronspear.com/blog/how-to-handle-gulp-watch-errors-with-plumber/
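
The gist, as I understand it, is to pipe through plumber() ahead of any plugin that might emit an error, so a bad compile doesn't kill a long-running watch. A minimal sketch (gulp-coffee is just an example plugin):

```javascript
var gulp = require('gulp');
var plumber = require('gulp-plumber');
var coffee = require('gulp-coffee');

// Without plumber(), an error event from gulp-coffee would end the
// stream and crash a long-running watch task.
gulp.task('coffee', function () {
  return gulp.src('src/**/*.coffee')
    .pipe(plumber())
    .pipe(coffee())
    .pipe(gulp.dest('build'));
});
```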

Wednesday, June 11, 2014

Build and test with exceptions

Everything in the system should be built and tested by default.
Sometimes there are good reasons to turn off a build or a test step. The danger is that something gets turned off and does not get turned back on.
So all exceptions to the system’s normal operation should be in a single location. Let’s call it exceptions.cson. The file might look something like this:
gulp:
    lint:
        exceptions: ["glob"...."glob"]
That means that every build and test step has to be guarded. For gulp we can add optional !globs to the gulp.src statement.
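
Here's a sketch of what the guard might look like once exceptions.cson has been parsed into an object. The guardedSrc helper and the plugin choice are my own inventions, not settled design:

```javascript
var gulp = require('gulp');
var coffeelint = require('gulp-coffeelint'); // example plugin; any step works

// Assume exceptions.cson has been parsed (e.g. with the cson module) into:
var exceptions = { gulp: { lint: { exceptions: ['vendor/**'] } } };

// Append the negated exception globs for a named step to its sources.
function guardedSrc(globs, step) {
  var skip = (exceptions.gulp[step] || {}).exceptions || [];
  return gulp.src(globs.concat(skip.map(function (g) { return '!' + g; })));
}

gulp.task('lint', function () {
  return guardedSrc(['src/**/*.coffee'], 'lint').pipe(coffeelint());
});
```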

Friday, May 2, 2014

Requirements and tests

Awesome Requirements And Tests

I write requirements and tests using CoffeeScript. The CoffeeScript gets compiled and run by a testing tool called mocha.

Simple Requirements

I start by writing requirements, in Coffee:
describe "Examples", ->
  it "should succeed because 1 is not equal to 1"
  it "should fail because 1 is not equal to 2"
describe and it are the names of functions in the mocha library. ()-> or -> indicate that what follows is a function block. So this translates to:
  1. Call the describe function, passing two arguments, a string, and a function block. The function block is the series of indented lines that follow.
  2. Within the function block, call the it function, passing a descriptive string.
If I had provided a function block for the it statements, even an empty block, mocha would consider this a test. Without a function block, it’s a “pending” test: in other words, a requirement.
If I invoke mocha on this file, I get this output:
 Examples
   - should succeed because 1 is equal to 1
   - should fail because 1 is not equal to 2

0 passing (5 ms)
2 pending

Converting A Requirement To A Test

I turn the requirements into tests by adding a block of code, also written in CoffeeScript. To pass in the block I must first change a line like:
it "should succeed because 1 is not equal to 1"
to
 it "should succeed because 1 is equal to 1", -> #Note the -> at the end
Now whatever is indented underneath the it statement is the test.
mocha uses another library called chai that supports several different styles for test expressions. The one I prefer is the should format. Writing a test that passes and one that fails in should format gives me this:
describe "Examples", ->
  it "should succeed because 1 is equal to 1", ->
       1.should.equal 1
  it "should fail because 1 is not equal to 2", ->
       1.should.equal 2
For the first test I could also write:
  1.should.equal(1)
or I could use expect format:
  expect(1).to.equal(1)
or the assert format. The last parameter is the message to display if the test fails, which this one cannot:
   assert( 1 === 1, "one is not equal to one")
   #or 
   assert.equal( 1, 1, "one is not equal to one")
Whatever format I use, the result will look like this:
Examples
    ✓ should succeed because 1 is equal to 1 
    1) should fail because 1 is not equal to 2

 1 passing (28ms)
 0 pending
 1 failing

  1) Examples should fail because 1 is not equal to 2:

      AssertionError: expected 1 to equal 2
      + expected - actual

      +2
      -1

A Real Example

I am working on the specs for my build system. I use a tool called gulp to run my builds. As make uses makefiles, gulp uses gulpfiles. So here’s how I describe my gulpfile:

describe "gulpfile", ->
  it "uses a javascript file that uses 'require' to load a utility file"
  it "uses a utility to read the task files from the tasks directory"
  it "prints the number of files loaded"
  it "uses 'gulp-load-plugins' to load plugins"
  describe "Tasks", ->
    it "has a 'links' task that creates links in the components directory so that Brackets can run on the gulp server"
    it "has a 'mocha:gulpfile' task that runs mocha on this file"
    it "continues running mocha tests even when a test fails"
    it "has a 'mocha' task that runs mocha on all specs in the test directory and sub-directories"
    it "has a less task that runs the less compiler"
    it "has a jade task that converts jade to HTML"
    it "has a coffee task that converts client side coffee code to js"
    it "has a lint task that runs jshint on all coffeescript"
    it "has an html task that runs useref on all html files"
    it "has a clean task that cleans the dist directory before a build"
    it "has a build task"
    it "has a connect task"
    it "has a watch task that rebuilds anything that changes"
    it "has a default task"
When I ask mocha to run the gulpfile, my output looks like this:
gulpfile
    - uses a javascript file that uses 'require' to load a utility file
    - uses a utility to read the task files from the tasks directory
    - prints the number of files loaded
    - uses 'gulp-load-plugins' to load plugins
    Tasks
      - has a 'links' task that creates links in the components directory so that Brackets can run on the gulp server
      - has a 'mocha:gulpfile' task that runs mocha on this file
      - continues running mocha tests even when a test fails
      - has a 'mocha' task that runs mocha on all specs in the test directory and sub-directories
      - has a less task that runs the less compiler
      - has a jade task that converts jade to HTML
      - has a coffee task that converts client side coffee code to js
      - has a lint task that runs jshint on all coffeescript
      - has an html task that runs useref on all html files
      - has a clean task that cleans the dist directory before a build
      - has a build task
      - has a connect task
      - has a watch task that rebuilds anything that changes
      - has a default task

  0 passing (6ms)
  18 pending
I have written no tests, so 0 are passing and 18 are pending.
Once I convert the first requirements to tests they look like this:
  it "uses a javascript file that uses 'require' to load a utilty file", ->
    require('fs').statSync('gulpfile.js')

  it "uses a utilty to read the task files from the tasks directory", ->
    exec('gulp utils:register')
      .then( (result) -> outputShouldMatch(result, /\n\[gulp] registered\n/) )
The second test has two parts that run asynchronously. A process needs to be spawned with exec, run to completion, and on completion the output needs to be matched for a string whose presence indicates that the job ran properly.
The exec function used here is not the one in the Node library. Rather, it is a derivative function that uses Promises. Promises are an increasingly popular abstraction that makes writing asynchronous code as easy as writing synchronous code. When a function like exec runs, it returns a promise object, which has a then method. The argument to then must be a function, which does not run until the promise is fulfilled, in this case when the exec'd process completes.
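
A minimal sketch of such a derivative exec, assuming the q promise library (my actual helper may differ in detail):

```javascript
// Sketch: wrap child_process.exec in a promise using q.
var exec = require('child_process').exec;
var Q = require('q');

function execP(cmd) {
  var deferred = Q.defer();
  exec(cmd, function (err, stdout, stderr) {
    if (err) deferred.reject(err);
    else deferred.resolve(stdout);
  });
  return deferred.promise;
}

// Usage, as in the test above:
// execP('gulp utils:register').then(function (stdout) { /* match output */ });
```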
The output of my tests now becomes:
gulpfile
    ✓ uses a javascript file that uses 'require' to load a utility file 
    ✓ uses a utility to read the task files from the tasks directory (779ms)

    <other tests>

2 passing (787ms)
  16 pending
The test shows the time the second test took because it was significant.

More requirements

I’m continuing to convert more requirements to tests, and right now the output looks like this:
gulpfile
    ✓ uses a javascript file that uses 'require' to load a utility file 
    ✓ uses a utility to read the task files from the tasks directory (847ms)
    ✓ prints the number of files loaded (810ms)
    ✓ uses 'gulp-load-plugins' to load plugins 
    Tasks
      ✓ has a 'links' task that creates links in the components directory so that Brackets can run on the gulp server (822ms)
      ✓ has a 'mocha:gulpfile' task that runs mocha on this file 
      ✓ continues running mocha tests even when a test fails (1033ms)


      <tests I have not converted>

  7 passing (4s)
  11 pending

Sunday, April 27, 2014

Bootstrapping Awesomeness: Step I -- the Virtual Machine

Screenshot of the VirtualBox Web Console (Photo credit: Wikipedia)
If I had an awesome development environment, creating another one would be easy. I'd have a model to work from, and I'd do it quickly because my development environment was already awesome.

But starting out means bootstrapping. Right now, after a couple of years of flailing I've got something that's sorta-kinda-good. And I can make it better faster because of that.

The first bit of awesomeness is that it runs in a VM. That makes it platform independent, easily cloneable, and I can snapshot it any time I'm doing something scary. More than once I've screwed things up. With a real system I'd have to undo a lot of crap. With a virtual machine I just go back to the last good snapshot and move forward.

The second bit of awesomeness is that on top of the VM I'm building components that run in a compliant browser. I'm using Chromium/Chrome as my compliant browser, but it should all work in Firefox and in newer versions of IE, in case anyone cares. I don't, so let's just consider that Chrome is part of the awesomeness.

I'm using VirtualBox as my VM Manager, but my virtual disks are VMDKs, which means they should be able to run with VMware as well. I haven't done the experiment because I find VirtualBox more than good enough for me.

Every couple of days I snapshot, then shut down my VM, turn it into a Virtual Appliance and upload the image to the cloud. A Virtual Appliance is a file in Open Virtualization Format. In theory it can be imported by any conformant VM Manager. Again I've only imported it into VirtualBox, but in theory that's the easy way to get to, for example, VMWare.

Right now my OVF appliance is a shade under 5 GB. It takes overnight and then some to upload from my home network, but I discovered on my last trip to the University of Maine that their guest network uploads at around 40 times home network speed. Woo hoo!

My VM has everything that I need to do my work, all pre-configured and kept up-to-date. And recently I've created a startup script so when the VM boots up it starts all the pieces. Since the stuff that I am using uses a browser at the front end, it needs a web server for the back end. So that's one of the things that loads up: a server that serves web pages and also WebSockets.

My back-end components are lightweight servers that run independently of the main server and talk to it and to each other using WebSockets. And they can talk to other components, as needed, by translating to regular sockets, or by using a server that spawns processes and creates WebSocket interfaces to stdin, stdout, and stderr. That means that any component can talk to any other component.
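
A minimal sketch of that last piece--a server that spawns a process and bridges its stdio over a WebSocket--using the ws module (the port and command are illustrative):

```javascript
var WebSocketServer = require('ws').Server;
var spawn = require('child_process').spawn;

// Each connection gets its own child process; stdout/stderr flow out
// over the socket, and incoming messages are written to stdin.
new WebSocketServer({ port: 8100 }).on('connection', function (ws) {
  var child = spawn('bash');
  child.stdout.on('data', function (d) { ws.send(d.toString()); });
  child.stderr.on('data', function (d) { ws.send(d.toString()); });
  ws.on('message', function (msg) { child.stdin.write(msg); });
  ws.on('close', function () { child.kill(); });
});
```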

I use gulp as my back end, and that's what I'm going to talk about next.