Hacker News | jonathanedwards's comments

I thought a benefit of ES modules was allowing cyclic dependencies, but I don't see any discussion of that here.


Just poor writing. I've talked more about this on my blog. Open source usually only works for tools for hackers. I want to reach the rest of humanity.


Yeah, I've gotten a lot of pushback on that. One approach I've seen is to reject expressions that would violate standard precedence, and require extra parentheses.
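That approach can be sketched as a small checker. This is a hypothetical illustration (the function and its rule are my invention, not anything from Subtext): it rejects any expression that mixes operator-precedence levels at the same parenthesis depth, forcing the user to add explicit grouping.

```python
import re

# Hypothetical sketch: reject expressions that rely on standard operator
# precedence, i.e. that mix precedence levels at the same parenthesis depth.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def needs_parens(expr: str) -> bool:
    """Return True if expr mixes precedence levels without parentheses."""
    depth = 0
    levels_at_depth = {}  # depth -> set of precedence levels seen there
    for tok in re.findall(r'[()+\-*/]', expr):
        if tok == '(':
            depth += 1
        elif tok == ')':
            levels_at_depth.pop(depth, None)  # subexpression is resolved
            depth -= 1
        else:
            levels_at_depth.setdefault(depth, set()).add(PREC[tok])
            if len(levels_at_depth[depth]) > 1:
                return True
    return False

print(needs_parens("a + b * c"))    # True: would rely on precedence, reject
print(needs_parens("a + (b * c)"))  # False: explicit grouping, accept
print(needs_parens("a + b - c"))    # False: one precedence level, fine
```

This only handles binary operators (no unary minus), but it shows the shape of the idea: ambiguity to a novice reader, not ambiguity to the parser, is what gets rejected.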


Strong agree. The syntax in this work was meant to be mostly hidden from the end user. I've found that syntax is still really useful for language design and implementation. I stopped before building the graphical programming environment on top.


Designing a productive graphical programming environment is very challenging. I think it is probably at least as difficult as designing the underlying language mechanics. I'd love to see any experiments you might be able to share.


Yeah that didn't happen. I stopped this work before I got to reimplementing the UI. I worked out some problems I've been obsessing about forever, but the result is just too complex and alien. Back to the drawing board. Thanks for your encouragement all these years.


Any significant improvement will necessarily be a significant change and therefore alien.

Computers were alien, and the culture projected all its fears onto them: rightists churned out endless movies about slave revolts with computers and robots in the role of slaves, while leftists tried to bomb and burn down computer centers as oppressive instruments of the System.

FORTRAN and COBOL were alien to experienced programmers, who seriously doubted how optimal the output assembly would be.

VisiCalc was alien to Fortran programmers. You're still pushing back against the remnants of that rejection today.

MacOS in 01984 was alien to those of us who'd grown up writing BASIC. It provoked hostility, too!

Pascal was alien to me because it didn't have line numbers.

Online chatting was alien. We called ourselves "geeks".

I still have anti-JS JS from 02000 on http://canonical.org/~kragen/hotlist.html because JS was alien and I hated it.

When we pioneered AJAX and Comet later that year at KnowNow (one of three independent inventions), it was alien, and in fact it took four years before Gmail took AJAX mainstream; Comet took even longer.

So don't worry about being alien. You're shooting for a revolutionary change, not an incremental improvement. Revolutionary changes are difficult under the best of circumstances, but they are impossible to achieve without being alien. Your weirdness budget isn't unlimited but it's necessarily larger than for an incremental change.

I know it would be a headache to achieve, but it would be pretty useful to have an open-source runnable version of any of the older Subtext prototypes.


Well said. You know, sometime soon I need to make a stand. My gut check was that this wasn't the hill I'm willing to die on.


I didn't know Subtext existed, and I'm impressed by the core set of ideas behind it (even if I was underwhelmed when I first looked at how it was licensed, sorry about that). I've recently seen many others converging on the same region of the design space, but your approach hits all the relevant points of what I would expect from a system for a next-generation style of computing.

I have some ideas of my own for building a new style of computing environment that doesn't feel alien, and I've arrived at a lot of the same conclusions that you describe in the "Notable features" section.

My approach is to focus the environment on content editing first (i.e. making it feel like a "structured content editor" more than an IDE), and to represent code primarily with a metaphor of content transformation over collections (think Google's OpenRefine or MS Power BI), rather than as processes running inside an application.

I have a core idea at the base of this system's structure, which is to reify function applications as named objects in the environment, much like your idea of treating function results as collections of values accessible as raw data. This reified object, which I call a wit for memorability, acts very much like the items in your Subtext, except that:

* Each wit has a unique identifier (a 'label') which can be used to annotate data through direct manipulation, like giving a name to a cell in a spreadsheet. The user can build collections by applying the same tag to all the values in the collection, manually or supported by a PbE process. The wit represents the data structure containing all those values, but at the same time it may represent the process used to generate them.

* There is no distinction between scalars and collections. I think this is the most unique feature of my system; you may apply a function to a wit (identified by its label), and it gets lazily applied to all its values, thus creating a new collection of transformed values. This is similar to named ranges in a spreadsheet, but it doesn't depend on a positional grid. And single values are treated the same as collections, so there's no need for loop constructs; the user can simply build templates with wits as variables, which then get applied to all possible values in their input domain. (Microsoft has a similar feature in their research on FlashFill, which was posted here a few days ago[1].)

* Wits may be used as identifiers of attributes, to extract content from records. That content may consist of previously stored values or be computed on the fly from their formula and current inputs (just like in your "Subtext is both a PL and a DB" line). I.e. wits can represent relations among entities, not just code applied over data. This blurs the metaphors of function application and data structure, which I believe is a good thing for easy-to-use computational environments.

* Since code is seen as data collections, you can build repeatable behaviors by changing the input to a sequence of transformations. For example, recorded recent user actions are not seen as a sequence of code steps, but as a sequence of intermediate processed states of the transformed collection (with each step itself represented by a wit, and thus accessible as data). I think you have a similar idea with your "functions seen as expanding to traces", but I envision using the intermediate steps as possible inputs to new functions (facilitating creation of pipes and recursion by demonstration).
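The scalar/collection unification described in the bullets above can be sketched in a few lines. This is purely a hypothetical illustration (the `Wit` class and its methods are my own invention, not part of any described system): a scalar is stored as a one-element collection, so applying a function never needs a loop construct.

```python
# Hypothetical sketch of a "wit": a labelled value that behaves identically
# whether it holds one value or many. For simplicity this applies functions
# eagerly; the described system would apply them lazily.
class Wit:
    def __init__(self, label, values):
        self.label = label
        # A scalar is just a one-element collection; users never see the
        # difference.
        self.values = list(values) if isinstance(values, (list, tuple)) else [values]

    def apply(self, fn, new_label):
        """Map fn over every value, producing a new labelled wit."""
        return Wit(new_label, [fn(v) for v in self.values])

prices = Wit("price", [10, 20, 30])   # a collection
fee    = Wit("fee", 1)                # a scalar, same interface

with_fee = prices.apply(lambda p: p + fee.values[0], "price_with_fee")
print(with_fee.values)   # [11, 21, 31]
```

The point of the sketch is the uniformity: the same `apply` works on `fee` (one value) as on `prices` (three values), which is what removes the need for explicit loops.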

I felt my ideas validated on seeing that you've created a system which has evolved toward a very similar design space; your Feedback mechanism is a neat solution to the representation of interaction and user updates, for which I didn't have a concrete solution (other than the abstract idea of representing all state as updates on functional-reactive streams).

Unfortunately my design process involves primarily paper prototypes and lots of concept notes; and also I don't get paid for it, so there's very little in a shareable form. Do you have a public discussion forum where these ideas can be discussed in the open? It would be good to share my detailed concepts and see how they apply to concrete use cases that you may have in mind for building your system.

[1] https://news.ycombinator.com/item?id=28548165 "Story of the Flash Fill Feature in Excel"


Interesting. Check out the futureofcoding.org slack. There are some workshops that take early stage ideas, like LIVE and PX. Keep working on your ideas and try to pull them together into a coherent explanation. I'd be happy to take a look and give you some feedback.


Thanks! I'll do that.


It sounds super interesting.


Thanks! I guess I'll clean up my notes and post them on futureofcoding.org community through the share-your-work section, to explain all the details and possibilities.

I took inspiration from this [1] classic article by Henry Lieberman, where tags are applied to elements of a composite as a way to edit its structure; and I combined it with the idea of a templating system to process or generate items in bulk (all templating systems do this; my idea is to abstract it into a generalized, always-present interaction mechanism).

[1] http://acypher.com/wwid/Chapters/23ProblemSolving.html Making Programming Accessible to Visual Problem Solvers


Neat!

I liked that book so much I bought it in paper form; also YWIMC. Lieberman's work in particular has always been inspirational. I think he sits down the hall from Jonathan, but I could be misremembering.

This morning I was working on a problem that I was thinking would be ideal for a system like this: I was taking notes in prose about potential blowing agents for waterglass or other geopolymer foams—things that produce gas when the polymer is heated to its softening point. For example, chalk, ammonium chloride, sulfur, aluminum hydroxide, sal mirabilis, saltpeter, or carbon plus silica. Each of these blowing agents produces a certain number of moles of gas per mole of blowing agent, has a certain molar mass, a certain density, a certain cost, and a certain activation temperature, plus certain material-compatibility concerns—you can't disperse hydrated sal mirabilis in an aqueous solution, for example, or chalk in an acid solution, if you want them to be intact in the unblown geopolymer preform. These are "givens" that can be found in a literature search or, in some cases, a search on MercadoLibre.

In the prose document, I would like to be able to tag the gas production per mole, molar mass, density, activation temperature, and material compatibility in such a way that I can render a table of the candidate materials in the document itself, and then sort it by gas production per mass, per volume, or per dollar, and filter it by activation temperature. Gas production per mass is calculated from molar mass and molar gas production; gas production per volume is calculated from gas production per mass and density, and similarly for gas production per dollar.
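The derived columns and the sort/filter described here are easy to sketch. The numbers below are placeholders, not sourced chemistry data, and the field names are made up for illustration:

```python
# Illustrative sketch of the blowing-agent table: givens per material,
# derived columns computed from them, then sorted and filtered.
# All numeric values here are placeholders, not real data.
agents = [
    # mol gas per mol agent, molar mass (g/mol), density (g/cm3), activation T (deg C)
    {"name": "chalk",     "gas_mol": 1.0, "molar_mass": 100.1, "density": 2.7, "t_act": 840},
    {"name": "saltpeter", "gas_mol": 1.5, "molar_mass": 101.1, "density": 2.1, "t_act": 400},
]

for a in agents:
    # gas per mass follows from molar gas production and molar mass;
    # gas per volume follows from gas per mass and density.
    a["gas_per_g"]   = a["gas_mol"] / a["molar_mass"]   # mol gas per gram
    a["gas_per_cm3"] = a["gas_per_g"] * a["density"]    # mol gas per cm3

# Filter by activation temperature, then sort by gas production per volume.
usable = sorted((a for a in agents if a["t_act"] <= 500),
                key=lambda a: a["gas_per_cm3"], reverse=True)
print([a["name"] for a in usable])   # ['saltpeter']
```

In the system being described, the columns would be wits tagged in the prose and the table would be a rendered view over them, rather than a separate script like this.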

Is that the kind of thing you're thinking of, or am I imagining wits very differently from what you really have in mind? I look forward to reading your notes! (I'm not in the Slack. I hate Slack. It's ransomware.)

A step beyond would be to type the reaction equations into the prose document and calculate the gas production per mole from that, and use a database of atomic weights to calculate the molar mass from that as well. With a database of the formation entropies and enthalpies of the compounds, I could not only calculate how endothermic the decomposition is (or, rarely, exothermic) but plot an Ellingham diagram as well, which would be pretty useful as a double-check on the sanity of the reported activation temperatures from the littrachaw. Together with combinatorial search functionality, it might also enable me to anticipate problematic side reactions, or how the activation temperatures would shift if the foaming was carried out at 1 MPa.
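The "molar mass from a database of atomic weights" step might look like this minimal sketch. The weight table is a tiny illustrative subset, and the parser handles only flat formulas like CaCO3, not parentheses or hydrates:

```python
import re

# Minimal sketch: compute molar mass from a flat chemical formula using a
# small table of atomic weights (g/mol). Only a handful of elements are
# included here for illustration.
WEIGHTS = {"H": 1.008, "C": 12.011, "O": 15.999, "Ca": 40.078, "Al": 26.982}

def molar_mass(formula: str) -> float:
    """Sum atomic weights for formulas like 'CaCO3' or 'Al2O3'."""
    total = 0.0
    for sym, count in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        total += WEIGHTS[sym] * (int(count) if count else 1)
    return total

print(round(molar_mass("CaCO3"), 2))   # 100.09
```

Deriving gas production per mole from a typed reaction equation would sit one layer above this, counting gaseous products per mole of reactant in the balanced equation.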

Some of the quantities I'm calculating on have units attached, such as temperature and density, and some have intervals of uncertainty, such as price, and it would be useful to have the computer track those things instead of the human. Confusing a Gibbs free energy in kJ/mol with kJ/kg can be a problem; similarly gas production per gram or per cubic centimeter.
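A minimal sketch of the unit tracking described here, assuming units are compared as plain strings (a real system, e.g. the pint library, would do full dimensional analysis and conversions):

```python
# Minimal sketch of unit tracking: a value tagged with a unit string, whose
# arithmetic refuses to combine mismatched units. This catches mistakes like
# adding a kJ/mol quantity to a kJ/kg one; it does NOT convert between units.
class Qty:
    def __init__(self, value: float, unit: str):
        self.value, self.unit = value, unit

    def __add__(self, other: "Qty") -> "Qty":
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Qty(self.value + other.value, self.unit)

g_molar = Qty(57.1, "kJ/mol")   # placeholder numbers
g_mass  = Qty(3.2, "kJ/kg")
try:
    g_molar + g_mass
except ValueError as e:
    print(e)   # unit mismatch: kJ/mol vs kJ/kg
```

Interval arithmetic for the uncertain quantities (like price) could be layered on the same wrapper, carrying a (lo, hi) pair instead of a single float.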

Some of Jonathan's work on Subtext has focused on variants of or alternatives to test-first programming; in spreadsheets people sometimes do test-first programming (for things like the equilibrium-temperature calculation I mentioned above, for example) and then delete the tests. In a context like the blowing-agent survey, being able to hide them instead would be better.

Finally, ideally, I'd like to be able to transclude at least the givens and possibly the calculation results in other notes: "According to my notes in [0], aluminum hydroxide is stable at atmospheric pressure up to 220°, ..." perhaps eventually refactoring those inputs into a centralized database so that I don't have to remember which note has the boiling point of sulfur in it.

The calculation part would be easy to do in Gnumeric or localc, but those handle the prose, units, and intervals very poorly at best; and they would be hopeless for the equation parsing part and, at best, clumsy for the database queries and Ellingham-diagram plotting.


> In the prose document, I would like to be able to tag the gas production per mole, molar mass, density, activation temperature, and material compatibility in such a way that I can render a table of the candidate materials in the document itself, and then sort it by gas production per mass, per volume, or per dollar, and filter it by activation temperature.

Yes, that seems to be a very direct application of my wit concept, with one wit for each magnitude, placing each wit as a column of the table and having one row for each material. In the end, a wit is basically a user-friendly version of a typed pointer in C, with a 'visual' GUI for combining them like an Excel pivot table or an OLAP 'data cube'. You can read the value at the end, or you can iterate it over a collection to process a list of values; the meaning is located in the code that uses it more than in the object itself, and you can share the reference to the same value in many different places.

Of course there are already several systems where you can define a data structure in code and show it rendered as labels on a document (see idyll[1], or the original Explorable Explanations), but my idea is that using such a system shouldn't require any code for basic or moderately complex programs:

1) The user should be able to define the data structure through direct interaction, like in a spreadsheet (e.g. Airtable does this fairly well). But the raw data could be extracted from existing documents, like with a web scraper, with the scraping expressions defined through point-and-click over the actual data. The now-extinct kimonolabs[2] had a good interaction model for this part. **

2) This structured data could be transferred between applications with a copy/paste-like universal protocol (most current applications require coding against APIs for this).

3) Calculations can be defined and re-applied to new data collections through simple formulas and more direct manipulation, possibly with programming-by-example assistance tools. The people at MS Excel are doing this, but they lack the universal API; everything happens inside Microsoft applications.

For your example, you could store your database of atomic weights as "hidden wits" somewhere else, extract the desired value by naming its label (wit labels are global variables under a particular namespace), and feed its values to the expression where you want to apply a function to the data.

> The calculation part would be easy to do in Gnumeric or localc, but those handle the prose, units, and intervals very poorly at best; and they would be hopeless for the equation parsing part and, at best, clumsy for the database queries and Ellingham-diagram plotting.

The good thing about wits is that you could chain them and transfer information between applications, with each app handling what it does well and returning the data back to the origin; i.e. any workflow that you could do manually with copy-paste. In this sense it's like a Unix pipe, but in a visual environment and transferring structured data (which can be decomposed at the receiving end, extracting one wit at a time). They are also like those awkward connector lines in dataflow visual languages, except that instead of a line you have a label with the wit name at the origin and destination.

Again, MS has Power Automate as a tool with these capabilities (I think it's way more complex and powerful than the classic Apple Automator), but Power Automate restricts you to a closed runtime environment of applications that support it, which you can glue together only through code.

> Is that the kind of thing you're thinking of, or am I imagining wits very differently from what you really have in mind? I look forward to reading your notes! (I'm not in the Slack. I hate Slack. It's ransomware.)

I don't like walled gardens either; I've heard Matrix.org and Element.io are lovely this time of the year... :-P

I've bought the domain http://witclay.com, though I've never managed to clean up my notes and publish them there. If I manage to gather an audience in that group, I will probably summon the will to publish my use cases over the next few months, providing some concrete examples that are more intelligible than abstract ideas alone.

[1] https://idyll-lang.org/docs

[2] https://youtu.be/RS3ckeDIOmE?t=157 Kimono on tables

https://youtu.be/jQ_hSNj1gI0?t=49 Kimono on structured text

** The wits here would be the yellow bubbles with the extracted data. Again, many of my insights I've seen realized in other systems; my idea is an orchestration of them all into a single user-centric system that makes them all compatible, at the user's fingertips.


I look forward to seeing the interaction model you come up with! Or your notes!


I'm reading through the Lab Notes from Alexander Obenauer[1], and I'm noticing lots of parallels. Universal data portability[2] in particular shares the same core insight of reusing all user data in any context where it's needed, and Universal reference containers[3] are the primary use case for my idea of the wit.

With the example interactions on that website you can get very close to what I imagined my system to be (though I would use a "select+tag" metaphor more often than the cumbersome drag-and-drop).

I still intend to allow spreadsheet-like processing on top of those metaphors, but the essence of the system is quite similar; many information systems are converging in the same direction, toward a similar model of interaction.

[1] https://alexanderobenauer.com/labnotes/000/

[2] https://alexanderobenauer.com/labnotes/002/

[3] https://alexanderobenauer.com/labnotes/003/


Please take this as Computer Science Fiction constructive feedback, Jonathan:

Things like

    // passing values through check
    x do {
      check >? 0
      check even?()
      + 1
    }
appear to be in conflict with the stated goal of

"It is an interactive medium for data processing by people who don’t want to learn how to program. For example, scientists who want to process data without becoming “data scientists”. Or people who want to build simple custom apps without learning to program."

[This fighting text with text is indeed a CSFi soap opera :)]


It says at the beginning that the actual UI wouldn't be text-based. Though it seems that leaves an open question of how to represent this?


Thanks yeah it's got a long way to go.


And then should "live programming", where the program is running while edited, become "gramming"?


Oh hi! What a pleasant surprise to see you on this thread :)

I guess what I was thinking was that when you're doing that, you're largely "postgramming". Some of the recent versions of Subtext that cater to test-first development are more "programming" in the etymological sense, but the "program" in that case is what we'd normally call a test suite—although other recent versions of Subtext turn the execution trace, complete with recorded results, into the test suite, which is more a question of "postgramming".


How did I miss this? Thank you!


As a geezer with glasses, I find my laptop screen works better than a monitor. The laptop screen is down in the reading zone of my progressive lenses, yet I can still look up out the window. A big monitor requires me to use glasses without any distance vision, making me feel like I'm trapped in a fishbowl.

I also love the fact that my window layout is exactly the same whether I am working at my desk or a coffeeshop. Or at least when I used to work at coffeeshops...


We are organizing a workshop using Illich's concept of conviviality to discuss the future of software. https://2020.programming-conference.org/home/salon-2020#Call...

