A place to be (re)educated in Newspeak

Saturday, November 17, 2012

Debug Mode is the Only Mode


There has been a fair amount of discussion recently surrounding some of Bret Victor’s talks and blog posts. If you haven’t seen these, I recommend them highly - with a grain of salt.

These pieces make important points related to programming and programming environments, and are beautifully done. 

They also relate to education and other matters which I will not discuss here.

Because of their exquisite presentation, they’ve elicited far more attention than others making similar points have garnered in the past. This is a good thing.  However, beneath the elegant surface, troubling questions arise.

The demos Victor shows are spoiled by the disappointing realization that we are not seeing a general purpose programming environment that can actually work these miracles for us. Instead, they are hand crafted illustrations of how such a tool might behave. It is a vision of such an environment - but it is not the environment itself. Relatively little is said about how one might go about creating such a thing for the general case - but there are some hints.

We should take these ideas as inspiration and see what one might do in practice. I expect this is one of the things Victor intends to achieve with these presentations. 

Victor recognizes that many of his examples depend on graphical feedback and don’t necessarily apply to other kinds of programming.  However, his use of traces and timelines  is something we can use in general. In one segment, the state during a loop gets unrolled automatically by the programming environment  - morphing time into space so we can visualize the progress (or lack thereof) of the computation.

This specific example might be handled in existing debuggers using a tail recursive formulation of loops - without tail recursion elimination! Then the ordinary view of the stack in a debugger could be used - though the trace based view may have advantages in terms of screen real estate, since we need not repeat the code. Those advantages will apply to any recursive routine, so adding an unfolded view of a recursive call (or a clique of such calls) is a small concrete step one might want to investigate.
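
As a concrete (if toy) illustration of that small step, here is a hedged sketch in Javascript; nothing here comes from Victor's demos, it just shows how a tail-recursive formulation makes a loop's history visible to an ordinary debugger.

// Iterative original: the loop's history is invisible once an iteration ends.
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) { total += i; }
  return total;
}

// Tail-recursive formulation, run *without* tail call elimination: pausing a
// conventional debugger in the base case shows one stack frame per iteration,
// each with its own i and total - the loop's progress unrolled in space.
function sumToRec(n, i, total) {
  if (i > n) { return total; }
  return sumToRec(n, i + 1, total + i);
}

sumToRec(10, 1, 0);  // 55, with eleven frames on the stack at the deepest point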

Traces that show all the relevant data are intrinsically connected to time traveling debugging, because we want more than selective printouts - we want to be able to explore the data at any point in the trace, following the object graph that existed at the traced point wherever it may lead us.

I firmly believe that a time traveling debugger is worth more than a boatload of language features (especially since most such boatloads have negative value anyway).  

When I first saw Bil Lewis’ Omniscient Debugger, I tried to convince my management to invest in this area. Needless to say, I got nowhere.  

The overall view is that a program is a model of some real or imagined world that is dynamic and evolving. We should be able to experiment on that model and observe and interact with any part of it. One should be able to query the model’s entire history, searching for events and situations that occurred in the past - and then travel back to the time they occurred - or to a time prior to the occurrence, so we can preempt the event and change history at will.

The query technology enabled by a back-in-time debugger could also help make the graphical demos a reality. You ask: where in my code is the call that (however indirectly) caused a given pixel to be written? It's a complex query, but fundamentally similar to asking when a variable acquired a given value.
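
To make the analogy concrete, here is a purely hypothetical sketch in Javascript - no actual debugger API is implied, and the event records are invented for illustration - of querying such a recorded history.

// A stand-in for the trace a back-in-time debugger would record automatically.
var history = [
  { kind: 'assignment', variable: 'total', newValue: 55, time: 17 },
  { kind: 'pixelWrite', x: 120, y: 45, time: 23,
    stack: [ { fn: 'drawGlyph', isUserCode: false },
             { fn: 'renderLabel', isUserCode: true } ] }
];

// "When did total become 55?" - a query over past program states.
var when = history.find(function (e) {
  return e.kind === 'assignment' && e.variable === 'total' && e.newValue === 55;
});

// "Which of my calls caused pixel (120, 45) to be written?" - the same kind of
// query, just over a different sort of recorded event.
var pixelEvent = history.find(function (e) {
  return e.kind === 'pixelWrite' && e.x === 120 && e.y === 45;
});
var culprit = pixelEvent.stack.find(function (f) { return f.isUserCode; });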

There is a modest amount of work in this area, some of it academic, some commercial (forgive me for not citing it all here), but it hasn’t really taken off. It is challenging, because programs generate enormous amounts of transient data, and recording it all is expensive. This gives a new interpretation to the phrase Big Data. Data is however central to much of what we do, and data about programs should not be the exception. 

A related theme is correlating data with code by associating actual values with program variables.  One simple example of the advantage of having values associated with variables is that we can do name completion without recourse to static type information. We get the connection between variables and their values in tools like workspaces, REPLs, object inspectors and (again!)  debuggers, but not when viewing program text in ordinary editors or even in class browsers.  
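
A minimal sketch of the idea in Javascript (the helper name is made up for illustration): given the actual value currently bound to a variable, an editor can enumerate its properties for completion, with no static type information involved. Inspectors, REPLs and debuggers do essentially this already.

function completionsFor(value, prefix) {
  var names = [];
  // Walk the object and its prototype chain, collecting property names.
  for (var obj = Object(value); obj !== null; obj = Object.getPrototypeOf(obj)) {
    names = names.concat(Object.getOwnPropertyNames(obj));
  }
  return names.filter(function (n) { return n.indexOf(prefix) === 0; }).sort();
}

var city = { name: 'Paris', population: 2200000 };
completionsFor(city, 'na');  // ['name'] - derived from the live value alone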

In Newspeak and Smalltalk, developers sometimes build up a program from an initial sketch using the debugger, precisely because while debugging they can see live data and design their code with that concrete information in mind. You’ll find an example of this sort of thing starting around 19:10 in Victor’s talk, where an error is detected as the code is being written based on runtime values.

The Self environment shows one way of achieving this integration of live data with code. Prototype based languages have a bit of an advantage here because code is tied to actual objects - but once we are dealing with methods that take parameters we are back to dealing with abstractions just like class or function based languages.  

You might even be tempted to say that Javascript, being a prototype based language, is a modern incarnation of Self. Please don’t drink and drive; at best this is a cautionary tale on the theme “be careful what you wish for”.

We need a process that makes it easier to go from initial sketches to stable production code. We'd like to start from workspaces and be able to smoothly migrate to classes and unit tests. This is in line with the philosophy expressed in this paper, for example.

It seems to me that the various tools such as editors, class browsers,  object inspectors, workspaces, REPLs and debuggers create distinct modes of operation. It would be great if these modes could be eliminated by integrating the tools more tightly.  There would always be a live instance of your scope associated with any code you are editing, with the ability to evaluate incrementally as you edit the code (as in a REPL) and step backwards and forwards as in a time traveling debugger.  The exact form of such a tool remains an unmet UI design challenge.

All of the above holds regardless of whether you are doing object-oriented or functional programming (a false dichotomy, by the way) or logic programming for that matter.

Tangent: I'm aware that the notion of debugging in lazy functional languages is problematic. But the need for live data and interactive feedback remains. And once an interactive computation has occurred, its timing has been fully determined. So while stepping forward may be meaningless, going back in time isn't.

We should stop thinking of programs as just code. The static view of code, divorced from its dynamic extent, has been massively overemphasized in the PL community. This needs to change, and it will.

Thursday, August 23, 2012

Newspeak on Dart

We've wanted Newspeak to run in a web browser for a long time - actually, since before the Newspeak project started.  The need for better programming languages for the internet platform has been evident for a while.  In a better world, the Newspeak project would have been focused on that from day 1.

However, this world is suboptimal.  It was only in 2010 that I spent some time sketching out the first version of NS2JS, the Newspeak-to-Javascript compiler. It was a toy, but in 2011, Vassili Bykov managed to bring it to the verge of self-hosting. Performance, however, was abysmal.  Having since joined the Dart project and learned more and more about the challenges Javascript poses as a compilation target (or as anything else), I guess this really should not have been a surprise.

We were able to reuse a part of that effort this summer when Ryan Macnak came to Google in Mountain View as an intern with the Dart team. His mission: NS2Dart, a Newspeak to Dart compiler. A detailed report is available at

The results are promising, though there is still a long way to go. Despite considerable losses due to impedance mismatches, on the Dart VM, NS2Dart comes within a factor of 2 of the Squeak-based Newspeak implementation (NS2Squeak). Given that the current version of NS2Squeak is twice the speed it used to be (thanks to Cog), it looks like performance is tolerable already. And since Dart will get much faster (the numbers in the tech report are already out of date), the future looks bright.

Now, when someone says the future looks bright, you should be getting nervous. So, just sign the dotted line, and then I'll get to the caveats ...

The current version relies on access to Dart VM's embedding API, which means you cannot run it in the browser. Hopefully, over time enough functionality will get added in the way of mixins and mirror builders that the same results can be accomplished in pure Dart. That is not yet certain, but let's retain the uncharacteristically optimistic tone of this post, and assume that does happen.

Since most browsers are likely to run Dart via translation to Javascript for some time, we need to look at the NS-Dart-JS pipeline: compiling Newspeak via NS2Dart, and feeding the result to dart2js (The Dart to Javascript compiler) to obtain Javascript code that runs in any browser.

Early on, NS2Dart could run on top of dart2js (by cheating on things like dynamic mixin application) and results were much better than with NS2JS. This is to be expected: dart2js does the heavy lifting for us.  The dart2js compiler already has 4-5 times as much code as NS2JS and is being developed by a very skilled team who understand JS performance inside and out.  Furthermore, it has taken a while to get to this point and performance work is by no means done yet.  So I don't feel too bad about our efforts on NS2JS.

Nevertheless, there is a lot of uncertainty over how well Newspeak can run on top of current Javascript implementations. A key problem area is non-local returns (NLRs). These are essential for user-defined control constructs. They are not supported in Dart precisely because it is not clear how efficiently they can be implemented on top of Javascript.
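
For the curious, here is a hedged sketch of the usual trick for emulating a non-local return on top of Javascript - throwing and catching a unique token. This is a well-known technique, not necessarily what NS2JS or dart2js does, and its overhead is precisely the efficiency concern mentioned above.

function NonLocalReturn(value) { this.value = value; }

// Returns from the enclosing function the moment the block "returns"
// non-locally, the way a ^ inside a Smalltalk or Newspeak block would.
function firstNegative(numbers) {
  var token = new NonLocalReturn(undefined);
  try {
    numbers.forEach(function (n) {
      if (n < 0) { token.value = n; throw token; }  // simulated non-local return
    });
  } catch (e) {
    if (e === token) { return e.value; }  // it was our NLR: unwinding stops here
    throw e;                              // a genuine exception: re-throw it
  }
  return null;  // the block never returned non-locally
}

firstNegative([3, 1, -4, 1, -5]);  // -4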

Mirror support in NS2Dart is very partial - the mapping to Dart's mirrors is pretty clean so far, but Dart mirrors are a work in progress, and so far only cover introspection.

Then there remains the small matter of the UI. I'd love to see the Hopscotch GUI on the browser, but that is a ton of work as well.  We'll see what UI solutions shape up in that space - we can always just call out to whatever UI Dart exposes (Ryan also implemented a Dart Alien API). So altogether, it might be a tad premature to declare victory, but we are making progress.

The dream remains to get a fully functioning system, Hopscotch, IDE and all that works well in the browser. However, I can imagine that many applications could get by with a lot less.  

The details are in Ryan's report and the code is in the Newspeak repo, where you can also get the latest Newspeak images with which to view it (I recommend you do that rather than use the old release; a new release will be out very soon). Thanks are due  to Vassili (whose work on NS2JS made this possible)  and to my colleagues on the Dart team, who graciously supported this effort in various ways, and most of all to Ryan.

Altogether, getting a top quality programming experience on the web requires a major effort.   Dart is beginning to make this possible.


Monday, July 23, 2012

Seeking Closure in the Mirror


I've discussed mirror based reflection many times in the past, in this blog and in talks. And of course I'm not the only one - you can read Allen Wirfs-Brock's posts on mirrors in Javascript. In this post, I want to focus on a particular kind of mirror that has not received much attention. Before I get too deep into the details, a few words of background.


You cannot get at the internals of a function: you can only apply it to various arguments and see how it responds. This is sometimes known as procedural abstraction. Among other things, it is the basis for object-based encapsulation.

Most languages that call themselves object-oriented do not actually support object-based encapsulation. One of the ways they get by despite this defect is to  rely on procedural abstraction directly. Perhaps the most notable example of this is Javascript. The only way to encapsulate anything in Javascript is to put it inside a function. Elaborate design patterns leverage Javascript’s closures to provide encapsulation.

You can see from the above that procedural abstraction is absolutely fundamental. There nevertheless appear to be circumstances where we might wish to breach the defenses of procedural abstraction.

Consider implementing a database interface in the style of LINQ, or Ruby on Rails, or Glorp. The underlying model is that the database consists of collections, and that these collections are accessed via standard functional operations such as filter, map, reduce, etc.  The arguments to these operations include closures. For example, you might write a query such as:

cities.filter(function(city){ return city.name === 'Paris'; });

and get back a collection of answers that includes Paris, Texas, and perhaps some other cities. To implement this interface on top of a database, you might want to transform this code into a SQL query.  To do that, you need to understand what the closure is doing. In .Net, for example, the type system is designed to coerce a literal closure into an abstract syntax tree representing the expression inside it, which can then be compiled into SQL.

Of course, it might be that you cannot reasonably compile the code into a SQL query at all.  We will assume that the system is allowed to fail in any case it deems too hard, but we’d like to cope with as many situations as we can.
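
As a purely illustrative sketch (the node shapes follow common Javascript AST conventions; the helper is hypothetical), compiling the restricted case of an equality test in the closure body might look like this - anything the translator doesn't recognize is simply rejected, per the assumption above.

function predicateToSql(ast) {
  switch (ast.type) {
    case 'BinaryExpression':          // e.g. city.name === 'Paris'
      if (ast.operator === '===' || ast.operator === '==') {
        return predicateToSql(ast.left) + ' = ' + predicateToSql(ast.right);
      }
      throw new Error('too hard: operator ' + ast.operator);
    case 'MemberExpression':          // city.name  ->  column "name"
      return ast.property.name;
    case 'Literal':                   // 'Paris'    ->  SQL string literal
      return "'" + ast.value + "'";
    default:
      throw new Error('too hard: ' + ast.type);
  }
}

// For the body of  function(city){ return city.name === 'Paris'; }  this yields
// "name = 'Paris'", to be wrapped as:  SELECT * FROM cities WHERE name = 'Paris'
var ast = {
  type: 'BinaryExpression', operator: '===',
  left: { type: 'MemberExpression', property: { name: 'name' } },
  right: { type: 'Literal', value: 'Paris' }
};
predicateToSql(ast);  // "name = 'Paris'"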

The LINQ approach relies on static typing, but this is not essential, and in fact has drawbacks.
For example, the static approach precludes the following:

function query(f) { return cities.filter(f); }

A more general alternative is to dynamically derive the AST of the closure body. Regardless, it seems I need a way to get the AST (or at least the source) of a closure - something that procedural abstraction is of course designed to preclude.

Even if I can get the source or AST, that isn’t always enough. Suppose I want to write

var cityNames = ['Paris', 'London', 'New York'];
cities.filter(function(city){
    return cityNames.includes(city.name);
});

I need the value of cityNames in order to execute the query.  In general, I need to get at the scope of the executing closure. 

Smalltalk and its relatives allow you to do this. How do they get around procedural abstraction? Well, in the case of closures, they basically throw procedural abstraction out the door. Every Smalltalk closure will gladly provide you its context, which is a reified scope that will allow you to find out what all the variables used in the closure are.

Obviously, this is not a very secure solution. One way we can usually reconcile security and reflection is via mirrors, and that is the focus of this post.  Given an object mirror that has full access to the closure object's representation, you should be able to get all the information you need.  This still has the drawback that the representation of closures is exposed as a public API. 

In this case, we want a ClosureMirror. Essentially, there needs to be an object with the magical ability to see into the closure, overcoming procedural abstraction.  The closure itself must not allow this; it must be impenetrable. The capability to look inside it must be a distinct object that can be distributed or withheld independently (exercise for the reader: find another way to solve this problem).

Concretely, a ClosureMirror needs to be able to provide the source code of the closure it is reflecting and a map from identifiers to values that describes the closure's current scope.
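
To make the interface concrete, here is a toy Javascript sketch with made-up names. Since Javascript offers no way to reach a closure's scope from outside (a point returned to below), this toy version has to be handed the scope explicitly; a real implementation would need runtime support.

function makeClosureMirror(closure, scope) {
  return {
    source:  function () { return closure.toString(); },  // source code of the closure
    context: function () { return scope; }                // map from identifiers to values
  };
}

var cityNames = ['Paris', 'London', 'New York'];
var predicate = function (city) { return cityNames.includes(city.name); };

var mirror = makeClosureMirror(predicate, { cityNames: cityNames });
mirror.source();   // roughly "function (city) { return cityNames.includes(city.name); }"
mirror.context();  // { cityNames: ['Paris', 'London', 'New York'] }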

Another situation where closure mirrors would be handy is serialization. If you need to serialize an object that includes a closure, you again need access to the closure’s scope.

I have not seen closure mirrors discussed elsewhere. As far as I know, the only implementation was done as part of the Newspeak-to-Javascript compiler. We are also considering it in the context of the Dart mirror system.   The Newspeak-on-Javascript implementation of closure mirrors is rather naive and inefficient.  One reason for this inefficiency is that Javascript provides no support whatsoever to do this sort of thing. In any case, the idea is new and virtually untested, but I think it has potential.


Sunday, June 17, 2012

Source Control Freak


Source control is an area of software development in need of reform. There is a need for a clean, clear semantic model. To the extent that existing source control systems have some sort of model, each system is different. Each has its own terminology, usually entangled with the mechanics of file systems and directories. As with IDEs, the use of files and text has spread in this domain because it is a lowest common denominator.

Semi-tangent: Well, almost a common denominator; it doesn’t cover Smalltalk, but this can be fairly viewed as Smalltalk’s fault, not source control’s.

As with IDEs, the tie to files is unfortunate because low level abstractions like text files and file systems are completely extraneous to the problem at hand. 

The major advantage of the text file based approach is that, rather than invent a source control system for every language, we can build one system that assumes source code consists of text files and go from there. A big disadvantage is that such a system has no understanding of the source code. It doesn’t understand the structure of a program - be it functions or classes or prototypes or procedures or what-have-you.

The mainstream approach also has another advantage: it can integrate artifacts from multiple languages. And another: we can go even lower than text files, and just consider files, so we can manage binaries and resources as well.  In general, while we would like programming language-specific understanding, we also want to deal with multiple languages, and with artifacts that go beyond source code.

Again, we return to the lack of a semantic model: not just for understanding the sources, but for the language-independent part of the system. What are versions, what are differences, what are repositories exactly? The answers differ from system to system, and are hard to disentangle from the mechanics of files and directories.

People have addressed parts of the problem, but I don't know of a completely satisfactory solution. For example, DARCS has a model of differences that is rather interesting. However, it doesn't tackle other issues, and the experience of the Haskell community using it has been mixed at best.

In the Smalltalk world, Monticello (and more recently, Metacello) provide a language aware source code management (SCM) system.  I’ve explained some of the problems with that approach above.  We tried to mitigate these somewhat in the Hopscotch IDE, where we mated Monticello with svn and a new GUI.  The idea was to use a mainstream standard tool with a language specific front end. No need to reinvent the entire wheel, only select parts. That too has been a mixed experience.

On the one hand, we’ve enjoyed a nice GUI. For example,  the changes presenter displays semantically meaningful diffs  - the system tells us what classes have changed, and what methods within them, resorting to textual diffing only within methods.  The diff is displayed side-by-side in the traditional manner; the key difference is that we get individual diffs for each unit of program structure. For example, if you’ve changed two methods in a class with 30 methods, you’ll only see the diffs for those two methods.
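
A minimal sketch in Javascript of the idea behind such a changes presenter (an invented representation, not Monticello's): classes are compared method by method, and only methods whose source differs are reported, with textual diffing reserved for their bodies.

// A class snapshot is represented here simply as a map from method names
// (selectors) to their source code.
function diffClass(oldClass, newClass) {
  var changes = [];
  Object.keys(Object.assign({}, oldClass, newClass)).forEach(function (s) {
    if (!(s in oldClass))            { changes.push({ method: s, kind: 'added' }); }
    else if (!(s in newClass))       { changes.push({ method: s, kind: 'removed' }); }
    else if (oldClass[s] !== newClass[s]) {
      // Only at this point would one fall back to an ordinary textual diff.
      changes.push({ method: s, kind: 'changed', before: oldClass[s], after: newClass[s] });
    }
  });
  return changes;  // two changed methods out of 30 yield just two entries
}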

In the screenshot, we can see that the startup: method in the class ObjectiveCAlien is the only thing that has changed.


On the other hand, the cost and effort of rolling one's own VCS tool is considerable even when it is done on top of a standard VCS that does the heavy lifting.  Because of this, we have not yet been able to realize all of the advantages we could from such a system. We could potentially show you time-machine-like views of individual classes or methods, since concepts like versions and history apply to these entities.

The system shown above is based on svn, and svn doesn’t support distributed development well; this became an acute problem when the project went open source.

So - do we need to build variants of such a system for other SCMs?  Given N languages and M SCMs, you get N x M systems.  Unattractive. One can see why people have stuck with the standard tools.

If we had a uniform abstraction of an SCM then we could implement the abstraction once on top of every real SCM we wanted to use. We could then implement language specific functionality on top of the abstract model. Now you get N+M pieces you need to build. 
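
As an illustration of the N+M arithmetic (the names here are invented, not Pur's actual API): one abstract repository interface, one binding per SCM, and language-aware tools written only against the abstraction.

// The single abstraction (written once).
class AbstractRepository {
  versions()                 { throw new Error('subclass responsibility'); }
  checkout(versionId)        { throw new Error('subclass responsibility'); }
  diff(versionA, versionB)   { throw new Error('subclass responsibility'); }
  commit(changeSet, message) { throw new Error('subclass responsibility'); }
}

// M bindings, one per concrete SCM.
class GitRepository       extends AbstractRepository { /* delegate to git */ }
class MercurialRepository extends AbstractRepository { /* delegate to hg  */ }

// N language-aware front ends, each written only against AbstractRepository -
// for example, one that re-parses file-level diffs into class- and
// method-level diffs for display.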

This is what Matthias Kleine set out to do in his master's thesis. The result is Pur, which defines a model that is general enough to describe several of the leading SCMs (mercurial, git, svn).   MemoryHole, a Newspeak-specific version control tool, has been built on top of Pur using a binding to mercurial. Since August 2011, we've been using MemoryHole instead of the svn-based tools.  One nice thing is that MemoryHole can work with git as well, and potentially even with old-fashioned svn. Here's a screenshot of MemoryHole in action:


We see two columns listing top level classes that differ between the running system and a repository. Each class is presented as a tree view of parts that differ. At the level of individual methods, we revert to a text diff. The configDo: method of class VCSMercurialBackedProvider`Backend`LocalRepository`Commands`NonCachingCommand is expanded to show what’s changed (a flush was added, highlighted in red). 

Tangent: We see 4 levels of class nesting here, which is as deep as I’ve seen in any Newspeak program.

MemoryHole gives us a distributed source control GUI application that is language-aware, but can work with differing standard SCMs. And of course, it is written in Newspeak so it is modular and extensible.

Keeping a language specific VCS running in sync with an evolving language was a problem in the early days of Newspeak. This is again part of the price one pays for dedicated language support. When the language is stable I think it is well worth the cost, just like any other language-aware tooling, be it an Eclipse plugin, an emacs mode, or something better.

Indeed, the situation is analogous to what happens with text editors versus IDEs.  The text editor is a lowest common denominator: the same tool can handle any programming language, and many other things as well. The IDE needs to be tuned extensively to each and every language, but in the end can give you a better experience. It took a long time for people to appreciate IDEs with their language specific support for editing, and to this day not everyone does. I imagine we’ll see a similar evolution in the area of source control.

Most intriguing to me is the connection to the general problem of synchronizing data, including programs (being a special case of data), across the network.  In the past, I’ve discussed the idea of objects as software services and full-service computing. I see source control as just a special case of that. Something to discuss another time.

The story doesn't end here, of course. There is a need for mathematical models and theory, and bindings of many languages to many SCMs. In general, more researchers should look at source control; no doubt they will have their own ideas. I hope we move the world a little bit forward, beyond files and text diffs.