Based on my understanding, some of the details he gave about the Spyglass/Microsoft situation are not quite right, but I don't think it would be appropriate for me to provide specific corrections.
However, since I was the Project Lead for the Spyglass browser team, there is one correction I can offer: We licensed the Mosaic code, but we never used any of it. Spyglass Mosaic was written from scratch.
In big picture terms, Marc's recollections look essentially correct, and he even shared a couple of credible-looking tidbits that I didn't know.
It was a crazy time. Netscape beat us, but I remember my boss observing that we beat everyone who didn't outspend us by a factor of five. I didn't get mega-rich or mega-famous like Marc (deservedly) did, but I learned a lot, and I remain thankful to have been involved in the story.
In ~1997ish, the company I was soon to work for licensed Spyglass for use in our Internet-over-cable-TV startup, WorldGate. We ran the browsers in the headend, eventually on custom-designed laptop-chipset-based blades, 10 to a 2U chassis, with 10-20 browser instances running on each blade. (No commercial blades existed back then.) We compressed the screen images and sent them down to settops, with user input via IR keyboards and remotes being sent back up to the headend.
I was hired in Sept 1998 to work on the browser; we had built our own Javascript engine to add to it (since that was kinda required for the web by then). I rewrote all the table code, because it just really didn't work well when you had "too few" horizontal pixels, especially if table widths were expressed in things like %. In the end, after a major redesign of all the table code, it did better than Netscape did in the 'hard' cases.
However, before long, it became apparent with all the additions being made as part of HTML4 that sticking with Spyglass-derived code and trying to update it ourselves to compatibly implement HTML4 (or enough of it) was going to be a herculean effort for a small company (max ~350 people and briefly a $1B valuation in 1999, but only around 5 or 10 people max on the browser, including the JS engine).
Given that, I made the decision in late 1999/early 2000 to switch us to the upcoming Mozilla open-source browser, and got deeply involved. The Internet-over-cable-TV part of the company failed (cable companies had other priorities, like breaking TVGuide's patent monopoly, which they paid us to do for them), and we moved onto other markets (hardware videophones) not involving browsers in 2003. I stayed involved peripherally in Mozilla, and when WorldGate dissolved in 2011 I joined Mozilla fulltime to lead the WebRTC effort.
The Spyglass internal architecture seemed at the time to be pretty reasonable compared to what I knew of the NCSA code.
Eric, I remember reading your Browser Wars web blog about a decade ago, and this posting caused me to jump back to the source material.
Marc recounts that Microsoft offered for Spyglass to sell "Microsoft Mosaic" as an add-on while still offering your own independent version - despite MSFT eventually making its own browser free anyway. Is there anything within that part of the larger story that you would tell differently, or clarify in more detail? It was always one of the parts of the story that was more glossed over.
I started at NCSA about eight months after Marc left. What I recall of this time is that the management at NCSA found the Microsoft folks so abrasive that they got fed up and told them to talk to Spyglass.
I can’t recall the exact timing of when NCSA ceded all sublicensing rights to Spyglass. It may have been after that experience, and perhaps a relief that they could send MS away in good conscience.
I don't remember anything about "Microsoft Mosaic" as a name, but we definitely retained the right for Spyglass to sell our own browsers.
In my recollection, the initial payment from Microsoft to Spyglass was higher than what Marc said, but I'm not sure.
But I am sure that the deal was later renegotiated at a substantially higher number.
I'm also pretty sure that even after that rework of the terms, Spyglass didn't get enough from Microsoft to compensate for the fact that Microsoft, er, you know, killed the browser business. And insofar as that is the essence of Marc's point, I agree with it.
"The Microsoft guys call Spyglass and they're like, yeah, we want to license Spyglass Mosaic so we can build it into Windows. The Spyglass guys say, yeah, that sounds great. Basically, how much per copy are you going to pay us for that? Microsoft says, you don't understand, we're going to pay you a flat fee, which is the same thing that Microsoft did when they originally licensed DOS way back when. But Microsoft said, basically, or at least my understanding of what Microsoft said was, don't worry about it. We're going to sell it as an add-on to Windows. We'll have Microsoft Mosaic and then you'll still have Spyglass Mosaic and you can sell it on other operating systems or compete with us or whatever, do whatever you want."
I'm finding switch expressions filling that gap a lot lately. I've also picked up at some point a bad habit of "borrowing" JS' ugly IIFE pattern in C#.
// the Func wrapper is what makes the lambda invocable in place
// (ResultType, whatever, and value are placeholders)
var x = (new Func<ResultType>(() =>
{
    var z = whatever;
    ...
    return value;
}))();
It is not the most performant way to write that code, and if the contents start to get long, refactoring into its own method starts to feel more likely, but in one-off cases it seems fine despite how ugly it is.
Back when I wrote this, I kinda hoped F# would surprise me and gain more traction than I expected. But 8 years later, if anything, it seems like the dominance of C# in the .NET ecosystem has grown.
F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.
F# always struck me as one of the most terribly underrated languages. I'm a lover of MLs in general, but F# lands on one of the sweet spots in PL space with ample expressive power without being prone to floating off into abstraction orbit ("pragmatic functional" is the term I believe). It is basically feature complete to boot.
My theory as an outsider: F# is strongly tied to the Windows world, the corporate world, where a conservative approach is always preferable, both in your tech stack and in the peons you hire to code all day. The corporate world isn't leaving OOP anytime soon, because it's what 95% of engineers focus on, the silent majority who do not frequent HN or play with functional languages on their weekends. The corporate world runs on Java and C#.
If F# had been released in the open-source, flashy and bustling world of Linux and macOS developers, it would have had much greater success.
I know you can run F# on Linux, but just like running Swift, it feels like an outsider I wouldn't want to bet my business on if I were a Linux-only shop (which I am), however nice it feels. Also, a decade ago when it had a chance to take root, Microsoft was still the Embrace Extend Extinguish company. It's not good enough to risk it, just like I'm not gonna use SQL Server for anything.
I am admittedly biased, because although I started programming recreationally in the LAMP-stack world of mid-aughts fame, a huge portion of my professional career has been in C# and the .NET stack.
I think you are grossly overestimating the degree to which the programming language you choose to use to solve a business problem constitutes "betting your business on." How would your business fundamentally change if your first 10k lines of code was in F# as opposed to Go, or Java, or Python, or TypeScript? These are also all languages I've been paid to use, and have used in anger, and with the exception of Java were all learned on the job. This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?
Aside from the EEE quip, I didn't catch any "M$ bad" vibe in GP's post.
I think the situation is clear-cut: until recently, you couldn't really run .NET on anything other than Windows, so the only people using it were those already invested in the ecosystem.
Among the people invested in the windows ecosystem, many (most ?) are large "non-tech" companies who hire people who mostly see their jobs as a meal ticket. These people don't have the inclination (for lack of curiosity, or time, or whatever reason, doesn't matter) to look into "interesting" things. They mostly handle whatever tickets they have and call it a day. Fiddling with some language that has a different paradigm wouldn't be seen as a good use of their time on the clock by corporate, or during their time off work by themselves, since they'd rather spend that time some other way.
Thanks for coming to my defense. You are right. I'm not a big fan of Microsoft, but I also don't hate them.
It's pretty simple, really. I am a Linux engineer, and it is not a great investment of time and money for me to get into .NET. I knew F# was cool, but is it cool enough to want to feel like a second-class citizen, running it on an OS and platform it is not intended to run on? It makes no business sense at all.
> is it cool enough to want to feel like a second-class citizen, running it on an OS and platform it is not intended to run on?
I'm not a software engineer myself, nor a Windows person, so I don't know the specifics, but FWIW, my client runs production .Net code of the C# variety on Linux, connected to pgsql. It's some kind of web service for checking people's tickets (think airport gates where you scan your ticket to enter), so not exactly a toy, even though it's nowhere near unicorn-scale. It seems to work fine, in the sense that I've never heard anyone complain about anything related to this setup. No "works for me but is borken in prod" or "thingy is wonky on Linux but OK on Windows so have to build expensive workaround".
The devs run Visual Studio (not VS Code) on their Windows laptops. Code is then built and packaged by Azure Pipelines in a container and sent to AWS to be run on ECS.
But it never was a tier 1 platform during its growth. So most non-Windows devs put their focus on other platforms. There is nothing wrong with that.
I could learn .NET now, but I don't really have an interest in doing so at this point. Also, the devs you talk about are on Windows, using their tier 1 IDE (Visual Studio) that only runs on Windows, which is my point exactly.
That's a fair point. Tooling is an important aspect of a language, at least for me. I don't know what the VS Code on Linux experience is like for .net.
I tried to dip my toes into F# out of curiosity, and it worked, following some tutorial and using VS Code. But it did seem somewhat bare bones. Although I'll admit I'm spoiled by Rust and IntelliJ.
Working for an org that bet on a mix of scala, python, and typescript, I can tell you which languages are being bet on for the rewritten services, and which language is getting in the way of getting things done.
Using it in a context where you need to make money, it's a bad bet. Fine for academic ideas and such things, but really hard to build a business around. And the tooling, community, libs, and docs show how it just can't punch the same weight as other languages when at the end of the day you need to get shit done.
We have both Akka and http4s in use, and are migrating to http4s for those services. We need to do more things more quickly with fewer hands. TS and Python are just easier and better tooled for the majority of our (CRUD) work.
dotnet compiles in general are slow AF on macs, and F# really stood out as the slowest last time I gave it a kick.
F# looks wonderful, but unless you’re already in the MS ecosystem, dotnet just feels bad and out of place. And I guess if you are already in the MS ecosystem you’re using C#.
> This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?
As with almost all "vibes"-related comments, this doesn't hold up. There isn't any criticism; just a positing that the sort of corporate, process-heavy companies that will major on Microsoft programming languages will be the last ones to want to try functional programming languages.
Would agree with this. I don't think the language choice is as massive a bet on the business as people think. I've seen much more niche and ancient langs without an ecosystem (no libraries, no SDKs to popular products, etc) build very profitable products. I would see these languages as a much greater risk.
As long as it has a base capability (libraries, maturity), and people who join can be productive with it in a month or so, then the risk is pretty low. For F#, most .NET developers, and even Node developers IMO, will get used to it relatively quickly. From my anecdotal experience with a number of languages, it's probably one of the easiest to onboard out of the FP langs, balancing the FP methodology while trying to be practical/pragmatic. It has a large ecosystem via the .NET platform and supplements it with FP-specific F# libraries where pragmatic to do so.
When it's time to scale out your team and now you're trying to hire dozens of F# developers it starts to matter a lot more. You can throw a rock and hit a Java developer. I hate the language, but finding other people who can be productive in it is trivial compared to F#.
One of the common threads among companies I've worked at which I would consider "successful" is that they don't really classify developers based on what languages they've used before. If you're a good programmer you can become a net positive in almost any language in a few weeks to a few months, and productive within the first year. Some of the worst companies I've worked for were the type who would toss a resume in the trash because they had 1 year of experience in $LANG from a few years ago and not the "have used it for 3 of the last 3 years" they wanted.
I think it depends on what you mean by "successful". Surely multi-billion dollar financial organizations are by at least some definition successful. They are a complete shit show from a tech standpoint. They are so large they cannot effectively manage specialist developer staff outside of very narrow niches. Standardization when you've got thousands of developers across hundreds of products matters. Maybe some "successful" startup can make things work when they are small. But you'll find they start to standardize when they hit real scale.
Totally agree; F# really feels like a language designed by someone who really does understand the theory and why it's important, but also wanted to make the language realistic to use in industry.
When I was at Jet and Walmart, I never really felt "limited" by F#. The language was extremely pleasant to work with, and I think most importantly, it was opinionated. Yeah, you can write Java/C#-style OOP in F# if you really want, but it's not really encouraged by the language; the language encourages a much more Haskell/OCaml-style approach to writing software.
Even calling C# libraries wasn't too bad. MS honestly did a good job with the built-in .NET libraries, and most of them work without many (or any) issues with the native F# types. Even third-party libraries would generally "just work" without headache. .NET has some great tools for thread-safe work, and I'm particularly partial to the Concurrent collections (e.g. ConcurrentDictionary and ConcurrentBag).
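For instance, here's a toy word-count sketch (not production code) using ConcurrentDictionary straight from F#; the BCL method takes F# lambdas where it expects Func delegates:

open System.Collections.Concurrent

// count occurrences; AddOrUpdate inserts 1 or bumps the existing count
let counts = ConcurrentDictionary<string, int>()

let bump (word: string) =
    counts.AddOrUpdate(word, 1, fun _ current -> current + 1) |> ignore

[ "spam"; "eggs"; "spam" ] |> List.iter bump
printfn "%A" (counts |> Seq.map (fun kv -> kv.Key, kv.Value) |> List.ofSeq)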
I also think that F# has some of the best syntax for dealing with streams (particularly with the open source AsyncSeq package); by abusing the monadic workflow ("do notation" style) syntax, you can write code that really punches above its weight in terms of things it can handle.
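For a taste of that style, here's a rough sketch of my own (assuming the FSharp.Control.AsyncSeq package, with a made-up fetchPage function standing in for real I/O):

open FSharp.Control

// stand-in for a real asynchronous call that returns one page of results
let fetchPage (page: int) : Async<int list> =
    async { return [ page * 10 .. page * 10 + 9 ] }

// an asynchronous, lazily produced stream: let! awaits, yield emits
let allItems : AsyncSeq<int> =
    asyncSeq {
        for page in 0 .. 4 do
            let! items = fetchPage page
            for item in items do
                yield item
    }

allItems
|> AsyncSeq.filter (fun x -> x % 2 = 0)
|> AsyncSeq.iter (printfn "%d")
|> Async.RunSynchronously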
Now, on the JVM side you have something like Scala. Scala is fine, and there are plenty of things to love about it, but one thing I do not love about it is that it's not opinionated. This leads to a lot of "basically just Java" code in Scala, and people don't really utilize the cool features it has to offer (of which there are many!). When I've had to work with Scala, I'm always that weirdo using all the cool functional programming stuff, and everyone else on my team just writes Java without semicolons.
But the article does make a reasonable point; part of the reason that Scala has gotten more traction is because Java is just such a frustrating language to work with. Scala isn't perfect, but being "better than Java" is a pretty low bar in comparison.
C# is honestly not too bad of a language; probably my favorite of the "OOP-first" languages out there. The generics make sense, the .NET library (as stated before) is very good, lambdas work as expected instead of some bizarre spoof'd interface, there are some decent threading utils built into the language, and it's reasonably fast. Do I like F# more? Yeah, I think that the OCaml/Haskell style of programming is honestly just a better model, but I can totally sympathize with a .NET shop not wanting to bite the bullet on it.
Martin Odersky is just a very nice guy and I get the impression that he isn't keen on saying "no", which is how you end up with a language that allows you to use XML tags inline (no longer supported in Scala 3).
The "opinionated" Scala are the Typelevel and Zio stacks, which are very cool.
The problem with the "better Java" approach is that although it has helped Scala's growth a lot, it has also made it susceptible to Kotlin. The Scala code that doesn't use the advanced type magic can be straightforwardly rewritten in Kotlin instead. Kotlin also stops your bored developers from building neat type abstractions that no one else understands.
People who use Scala only as a "better Java" can now use Kotlin as a "better "better Java"".
Yeah, and I think that's why a language like Clojure, which is substantially more opinionated than Scala, has been relatively unfazed by Kotlin. Clojure is much more niche than Scala, and the adoption has been much more of the "slow and steady" kind.
People who are writing Clojure likely aren't looking at Kotlin as an "alternative"; while they superficially occupy a similar space, I don't think Clojure has any ambitions of being a "better Java", but rather a "pretty decent lisp that runs on the JVM with some cool native data structures and good concurrency tools". I do like it better than Java, but that's because I like FP and Lisp a lot; if I needed a "better Java" right now, I would unsurprisingly probably reach for Kotlin.
Yep, Scala got a lot of attention because you could kinda write it like Java, and Java hadn't changed much in a very long time - people were looking for a "better Java" - and Clojure obviously isn't that.
Kotlin's whole point is a "better Java", so it's going to grab people who went to Scala for a "better Java". Also Java actually has a sane roadmap and methodology to get better too, so there's that now too - with the preview/incubating JEPs, people can see what is coming down the pipeline.
Yep, I don't dispute anything you said there, I think that's pretty consistent with what I said.
Clojure makes no claims of being "Java++". It's a lisp first and foremost that focuses on embracing the host platform and being broadly compatible with existing libraries and strong concurrency protections.
You can use eventlog traces, from Debug.Trace [1]. You can (traceEvent $ "look: " ++ show bazinga) everywhere you need and then stare at the log to your heart's content.
Not everything is tracing and debugging, sometimes you really need to output intermediate results for "normal", "production" purposes. One could still abuse Debug::Trace, but that would really be ugly.
I also object to that "everywhere". It is far easier to just dump an extra 'print' line somewhere inside a for-loop than into a `foldl (*) 1 $ map (+ 3) [17, 11, 19, 23]`. And that is an easy one...
With eventlog you have a lightweight profiling and logging tool for "normal", "production" purposes. You can correlate different metrics of your program with your messages. This is not an abuse of Debug.Trace (notice the dot), it is the normal state of affairs, regularly used, and the RTS is optimized for that use case.
I develop with Haskell professionally. That foldl example of yours is pretty rare and usually dealt with using QuickCheck [1], the mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.
Eventlog traces are RTS calls wrapped into unsafePerformIO, you are right. The trace part of eventlog is optimized for, well, tracing and is very, very lightweight. It is also safe from races, whereas simple unsafePerformIO (putStrLn $ "did you meant that? " ++ show (a,b,c)) is not.
In my opinion, eventlog traces make much better logging than almost anything I've seen.
Right now, developing with C++, I miss the power of Haskell's RTS.
> I develop with Haskell professionally. That foldl example of yours is pretty rare and usually dealt with using QuickCheck [1], the mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.
So actually not everywhere. And QuickCheck does something else entirely.
You missed the word "usually". You really, really do not need a print within the body of a loop of any tightness. But you can have it.
The foldl example of yours should be split into property checking and controlling for expected properties of the input. The first part is done via QuickCheck and the second part is usually done with assertions and/or traces.
But nothing precludes you from having your trace there, inside the foldl argument. It is clearly the wrong place to have it, but you can still have it there.
So I repeat, you can have your traceEvents everywhere.
You're thinking of Haskell. F# was modelled after OCaml, which doesn't attract monad transformer stacks, and doesn't have a zoo of compiler extensions.
Well, they aren't actually compiler extensions but preprocessor extensions (PPX).
And I would really like it if OCaml had the ability to name the needed PPXs in the source file (like Haskell's compiler extensions), so as to not have to read the Dune (or whatever build system is used) file to learn where `foo%bar` or `[@@foo]` is coming from and what it is doing. But at least the usage of `ppxlib` nowadays should make PPXs "compose", aka not stamp on each other's feet.
I haven’t used it for some time but OCaml certainly used to have a zoo of incompatible compiler extensions. Circa 2008 or so I once hit on the brilliant idea of using protobufs to get two mutually incompatible halves of an ocaml program to talk to one another only to find that required yet another compiler extension to work.
I'm pretty sure F# was modeled on both. There are some definite "Haskell-isms" in F#; if nothing else, monads are typically done in something more or less equivalent to the `do` notation (an `async` or `seq` block), for example.
The syntax superficially looks a lot like OCaml, but it doesn't do the cool stuff with OCaml functors and modules; you write it a lot more like Haskell most of the time.
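For a concrete taste, a toy example of mine: let! inside async { } plays much the same role as <- in a Haskell do block, and seq { } is the same computation-expression machinery for lazy sequences.

open System.Net.Http

// await a Task-returning .NET call inside an async workflow
let fetchLength (url: string) : Async<int> =
    async {
        use client = new HttpClient()
        let! body = client.GetStringAsync(url) |> Async.AwaitTask
        return body.Length
    }

// the same builder style, but for a lazy sequence
let squares = seq { for x in 1 .. 5 -> x * x }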
Don Syme began with a port of Haskell to .NET, but SPJ convinced him that this was a bad idea, so he chose OCaml instead. ("The Decision to Create F#", Page 9)
Yeah, that's the F# support for it. The interesting bit is that C# supports nearly the same query syntax (`for item in something where item.IsInteresting select new { name = item.Name }`) and secretly supports just about any arbitrary Monad you want to write [1]. Likewise, C# was one of the first languages to take something like F#'s async { } builder for async/await syntax, as a different Monad transformer (which C# also built to be duck-typable for any Monad that fits the right pattern).
LINQ and async/await alone can be heady "gateway drugs" in C# to learning functional programming fundamentals (including the deep, advanced fundamentals like Monads), and C# only seems to continue to pick up little bits of FP here and there over time.
There are definitely lots of reasons why even some of the biggest FP fans often think "C# is good enough" without needing to reach for F# in 2023. (Which is why the post here, and the comments above in this same thread lament that F#'s biggest problem is the shadow that C# casts.)
[1] Mark Seemann did an entire series of blog posts on coding common Monads in both F# and C#, and the C# code is very interesting at times: https://blog.ploeh.dk/archive/
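On the F# side of that analogy, the builder pattern is open to user code as well. A minimal toy sketch of my own: an option/"maybe" workflow built from nothing but Bind and Return.

type MaybeBuilder() =
    member _.Bind(m, f) = Option.bind f m
    member _.Return(x) = Some x

let maybe = MaybeBuilder()

// let! is the Bind call; a None anywhere short-circuits the rest of the block
let tryDivide a b =
    maybe {
        let! q = if b = 0 then None else Some (a / b)
        return q + 1
    }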
I don't dispute your claims, as they are subjective. I do know that I enjoy a lot of the functional aspects of C#, but I think it's something where you need to have a real discussion with your team and decide a coding style and feature implementation for your code. If your team can't all speak the same dialect, you're going to have issues. Having seniors able to work with juniors and discuss the functional aspects, as well as when to use things like LINQ, goes a long way towards a consistent and easily understood codebase. I know not every shop has this luxury, which is why I still agree with your statement.
I learned C a long time ago, I still have the K&R book, and if I look at C code I can get a good idea of what is happening, even though I haven't kept up with all the changes.
In C# now, two developers can write code that accomplishes the same thing but might not be able to read each other's code. I don't think that is healthy.
It is somewhat the Microsoft Office / Word approach to programming languages. Just keep adding features on top of features.
F# also tried to pivot into data science lately, only to have Microsoft themselves jump into Python and be the entity that finally managed to convince Guido and others to invest in improving CPython's performance, and possibly JIT integration.
Basically, the pivot effort got sabotaged by the same company.
Microsoft has always been a polyglot company. They also invested heavily in R and I believe even contributed some things to Julia.
I don't think the pivot entirely failed, there's definitely a small niche for "data science, but it needs to run in .NET" and F# still to my understanding fills it well. It's a very small niche and I don't expect to hear a lot of data scientists directly training for it, but there's a lot of advantages in places that use the Azure stack, for instance, for faster/better/more integrated data science when done with F#.
F# would probably need a lot more investment in dynamic types to truly attract a lot of data scientist attention. (Though the .NET DLR still exists and could use some fresh, modern love.)
Relatedly, I appreciate a lot that Microsoft's polyglot approach helped standardize the ONNX runtime, and even if the data scientists I'm working with prefer Python or R, I can still take ONNX models they build and run them in a C# or F# library with very little sweat.
I think if Microsoft would have continued to invest in projects such as IronRuby and IronPython, we'd be much further along in integrating different paradigms in a way that feels more natural, while also continuing to grow the DLR (for both features and performance).
I am only scratching the very surface of data science, but coming from .NET 1.0 and just starting to learn Python, I'm still finding it far easier to use Python for these tasks. It's most likely just the library ecosystem, and I'm hoping that Microsoft continues to add officially supported libraries to .NET for these tasks. ML.NET feels very foreign to me compared to using other libraries in Python, even as a beginner in Python (although I have experience with various languages, but only minimal experience in functional languages, mostly F#).
I don't think it matters how good or bad C# is; Object Oriented Programming is a mess.
Learning how to use an Object System (a tree of objects/classes) is inherently hard.
The current problem with F# is that it doesn't do enough to shield you from objects. It does what it can, but to use F# effectively you still need to learn some C# and a lot of APIs that are basically Objects inside Objects inside Objects calling Objects calling Objects and more Objects.
OOP is bad because eventually OO systems become too complex; an OO API is intimidating.
Separating Data from Behavior manages complexity better
If the only flaw in C# is knowing which method calls require the new keyword because they're constructors, and which don't because they're factories, that is bad enough to want to avoid it.
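For what that separation tends to look like in F#, here is a small sketch of my own (a made-up Order type): data as a plain record, behavior as module functions.

// data: an immutable record, with no behavior attached
type Order = { Id: int; Items: (string * decimal) list }

// behavior: plain functions grouped in a module
module Orders =
    let addItem name price (order: Order) =
        { order with Items = (name, price) :: order.Items }

    let total (order: Order) =
        order.Items |> List.sumBy snd

let order = { Id = 1; Items = [] } |> Orders.addItem "book" 12.50m
printfn "Total: %M" (Orders.total order)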
> OOP is bad because eventually OO systems become too complex; an OO API is intimidating
This strikes me as a sort of ... reverse of survivorship bias.
You look around and see all complex systems are in OO, then you conclude that it is OO that is the cause of the complexity.
Have you considered that the non-OO designs are deficient in some way that prevents them from being used for the type of systems that you find to be examples of OO being bad?
Not that I am defending OO, I just want to know how you are differentiating between "OO produces complex systems" and "OO is used for complex systems".
> Have you considered that the non-OO designs are deficient in some way that prevents them from being used for the type of systems that you find to be examples of OO being bad?
Having shipped both significant non-OO projects and significant OO projects, I'd say the non-OO ones' drawbacks were usually related to low adoption. In terms of code and architectural complexity, they were either comparable to OO projects (in specific situations), or better.
That being said, in most situations, language/paradigm choice were not the main drivers of project success. At worst, a bad OO codebase is a drag, not a killer, and the same is true with non-OO projects.
> Not that I am defending OO, I just want to know how you are differentiating between "OO produces complex systems" and "OO is used for complex systems".
OO definitely produces complex systems. And, let me be clear, by OO I mean the social consensus in OO circles, not the paradigm itself or the technical tools. My take is that OO circles host a cottage industry of consultancies and gurus peddling a stream of design patterns, advice, etc. which end up layering in any long-lived OO codebase and create unnecessary complexity.
>OO definitely produces complex systems. And, let me be clear, by OO I mean the social consensus in OO circles, not the paradigm itself or the technical tools. My take is that OO circles host a cottage industry of consultancies and gurus peddling a stream of design patterns, advice, etc. which end up layering in any long-lived OO codebase and create unnecessary complexity.
This right here. Every time I hear a mid-level dev bring up DDD I contemplate quitting and spending some time looking for a Rust or Clojure gig. Sometimes it gets so bad I think about biting the bullet and going to node.js.
C# isn't a bad language, even the frameworks are taking a nice turn towards simplification (eg. ASP.NET Minimal APIs, EF direct SQL queries) but the culture it creates... LAYERS of bullshit :D
Absolutely! It is the misunderstanding and overuse of heavy abstraction, with "a class per file", that blows these systems up into liabilities rather than solutions. Start with a low number of abstractions, as few as you can get away with given your requirements, and then only expand when the requirements change. It really doesn't matter the paradigm; it's possible to heavily abstract a functional system with various transformative functions that aren't truly needed until the data becomes more complex.
There is a whole industry peddling OO systems that are extremely abstracted for the benefit of filling chapters in a book, or producing extra pages of content on a website. I fell victim to both early on in my knowledge and even my professional work, but somehow managed to follow what "felt right" and broke away from that to find an easy path forward that allowed me to use the tools I was given in the easiest way possible, and only introduce complexity when the solution was complex (not complexity for complexity's sake).
I do feel like OOP introduces a lot of inherent overhead, not necessarily "complexity". I feel like doing anything in Java, for example, typically requires the creation of several separate files, spanning 30+ lines each, much of which is just class decorators and the like. I do feel like often the equivalent program in something like Clojure will be much shorter, and be contained in substantially fewer files without features missing. So much of the stuff that people love about classes, interfaces, and polymorphism can be replicated pretty easily with basic first-class maps and multimethods.
Obviously it's not a direct apples-to-apples comparison; Clojure is an untyped language, and performance for it is admittedly generally a little more difficult to predict. But, and obviously this is a sample size of one, I do feel like my programs have less... "fluff" than the equivalent OOP languages.
But if you convert that Java to Kotlin it'll get vastly shorter, whilst still being semantically the same. OOP doesn't have to mean verbosity. Java chose that path to keep the language simple, like how Go is also very verbose but simple (Kotlin is more concise but more complex).
One of the main issues with this is that OO as practiced in C# and Java is only a very thin extract from the real OO as provided by for instance Smalltalk. And without that kind of environment you end up with the worst of both worlds, where you have an OO like interface layered on top of things that aren't really objects to begin with, because they aren't 'alive'.
Very good observation but I do wonder whether it’s the worst of both worlds or the best of both worlds - more the “eat the meat and spit out the bones” approach?
I feel this kind of argument is a bit pedantic though; when people complain about OOP, they're generally complaining about the mainstream implementations of OOP.
I don't think that people are really considering Smalltalk's OOP style when they complain about Java OOP.
Erlang and Elixir with spawn processes (objects with their own CPU) and send / receive message passing? But they do their best to hide it in OTP behind all that handle_* boilerplate.
Chicken and egg, perhaps? OOP lives and breathes state, so complexity (defined as an excess of state consideration) seems a natural pairing, yet the overhead and complexity is increased by each in response to the necessity of the other? That is, when the complexity of the problem shifts, there is a parallel increase in the complexity of the OOP solution.
The general anxiety of the movement towards functional or procedural programming in general might also be a feature of age: a young programmer eager to impress that they can juggle 8 balls effortlessly, but called upon to do the same 15 years later might admit 3 balls sufficed to begin with, and is closer to an attainable sustainable solution.
The worst part of OOP is that all the properties of an object can be a mishmash of values and are mutable. In any method, you never know if the object is in some undesirable state without checking properties within the method itself. Multiply that headache across all methods and all other classes and it becomes a mutable mess. It makes it weird that we pass around objects as types when they encapsulate so much state and logic. They aren't really concrete data types, they are an entire living village.
With functional languages, it tries to enforce some explicit type signatures in the function arguments so things are cleaner within the functions themselves.
This isn't a property of OOP. This is a property of poor class design. You absolutely should be designing classes such that every possible sequencing of their public methods leaves them in a valid state and maintains their invariants.
Structs have the issue you describe and they aren't really OOP.
Yes, if the first thing you do when you write a class is make a setter method for each field then you will have problems. That's not really a property of OOP.
All of these logical errors that are easy to commit are terrible because they are usually runtime bugs, not compile time.
As I think of it, I think a neat feature of OOP would be conditional methods that are only callable under specific circumstances. For example, the “Customer.SendPasswordResetEmail()” method couldn’t be called (or didn’t even exist) until I verify that the “Customer.IsEmailVerified” property is true.
Being able to add these type of annotations to methods for expected object state would help catch some logic bugs at compile time.
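One way to approximate that at compile time today is to push the check into the type system instead of onto the method. A rough sketch of my own (hypothetical Customer type, written in F# since that's the language under discussion): the reset-email function only accepts a value that can only be obtained by verifying.

type Customer = { Email: string; IsEmailVerified: bool }

// the single constructor is private, so the only way to obtain a
// VerifiedCustomer is to go through verify
type VerifiedCustomer = private Verified of Customer

let verify (c: Customer) : VerifiedCustomer option =
    if c.IsEmailVerified then Some (Verified c) else None

// cannot even be called with an unverified customer
let sendPasswordResetEmail (Verified c) =
    printfn "Sending password reset to %s" c.Email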
> The worst part of OOP is that all the properties of an object can be a mishmash of values and are mutable.
Const-ness is one of the things I really miss from C++. I could look at an object and be reasonably sure I wasn't mutating it by calling foo.length() for example.
IMHO that is of such little help and the drawbacks weigh much heavier: const-correctness spreads like a cancer (try making one thing const without having to fix a hundred other things), and often requires annoying boilerplate -- I'm no C++ expert but if these are still the best available solutions...: https://stackoverflow.com/a/123995/1073695
Sometimes I want to add one little extra thing that gets mutated in an otherwise "const" method, e.g. for debugging purposes. If the compiler doesn't let me do that because I valued the ideal of const-correctness higher than practical concerns, I know I've done something wrong.
Perhaps it depends on the code-base. I worked on a medium complexity C++ project (~300kLOC) but which used multi-threading quite heavily with shared data structures, and there was only a couple of instances where I felt it got in the way.
In the vast majority of cases it reduced my cognitive load significantly because I could just look at the method declaration and see that my code would be fine.
Yes, as always it all depends on the context and how features are used. Maybe I was just bitten too often in situations where const is particularly gnarly to use. I know for sure that in many cases, such as when calling small helper functions for copying a shallow array and such, one can easily pass pointers as pointers-to-const.
However, IME there is a big problem with const for more database-y, more stationary in-memory data. This is the kind of data that is almost always going to be mutated by at least some part of the code at some point in time. There is a fundamental problem of communication between mutating code and non-mutating code (the "strstr()" function, which has to apply a const-cast hack internally to implement its interface, is a trivial example here).
As I said, there are certainly situations where such "communication" isn't needed, but I'm anxious about precluding the possibility in the name of const-correctness.
I feel that instead of const (or whatever static formalized description of what a function is doing), good naming is most helpful to intuit broadly what that one function was doing again.
In C at least, I've ended up leaving const almost exclusively for the cases where the data is truly const - i.e. in the .ro section of the binary, and I know for sure it won't ever have to be modified, and basically I have to apply the const qualifier lest it puts the data in the wrong section / it needs an awful cast to remove the const. The majority of those are string literals typed as "const char *".
Erlang and Elixir store state in arguments of recursively called functions, usually running in their own processes separate from the rest of the application. There is nothing in the language to enforce correctness of the state. They are generally regarded as functional languages even if they are somewhat object oriented if one thinks about their message passing as method calls to the object / process storing the state.
>If the only flaw in C# is knowing which method calls require the new keyword because they're constructors, and which don't because they're factories, that is bad enough to want to avoid it
I'm sure this is just an example popping first out of your mind, but it seems like an oddly specific thing to mention. Especially since the answer is obvious if you know C#: the name of the method matches that of the type if and only if it is a constructor.
I won't comment on the rest of your post as my experience with F# is minimal; but I think I understand where you're coming from.
> Separating Data from Behavior manages complexity better
There's a sweet spot, and it varies. Sometimes it is difficult to find. API design can be difficult. Managing complexity is sometimes itself a complex process.
Every system, if allowed, can become too complex. No single paradigm of programming is perfect for all cases. OO is one way to structure and model a system.
No matter what language you use, you will end up with some form of a struct, a set of values that belong together. Then you will have lists of some structs and trees of some structs. You will almost certainly have to create lists/collections/groupings of structs, because those are quite useful and universal. How you act on those collections differs between different idioms.
In other words, you will create a model of the data one way or another, and you have to maintain it / change it as required over time. The data structures themselves are rather often based on one or more database schemas where the data will be extracted and saved.
Just like every language is able to be slow/non-performant -- but OO in this case would be Python in a web context; it doesn't invalidate that a good amount of OO codebases in the wild devolve into incomprehensible black boxes, where no one has any idea what anything does or how to make meaningful changes that fulfill the original intent (compare that to iterative programming, where you can at least read it).
As for trees: roll your own. They're simple enough, yet so tightly coupled with context that no generic implementation exists that is flexible enough. You do not need OO to create a tree. C has been working with trees since long before the current Frankensteination of OO was even a twinkle in Gosling's eye.[1]
Data structures do not need inheritance -- they might need delegation (message passing that requires you to actually think about your system).
Data structures do not need encapsulation -- they most likely need namespaces. Realistically, most classes will be used as namespaces.
Data structures do not need polymorphism -- just implement the members you need, and name them appropriately (no 5+ word phrases, please. Please!)
What modern OO does is lower the barrier to productivity in the present, and then pays for it in the future. It's no different than writing your "planet scale" backend system in JS.
[1] If you want to know why we have Java: some guys that didn't have the time to think about low-level things (memory management specifically) for their embedded applications got sick of trying to learn C++ and decided to make their own language. That's it. There was no grand plan or thoughtful design -- it's just a mishmash of personal preference. The same people that described C++ as "being too complex" (fair) and using "too much memory" (lol).
What do you find insane about the C# `List` source code?
I'm not a C# programmer, but the public API looks sound, and the entire thing is like 1K LOC including docstrings (I guess the inherited code would add to that).
Instead we have to go diving through IList<T>, which implements ICollection<T>, which implements IEnumerable<T>, which implements IEnumerable (again). Just because each interface is composed of another interface doesn't mean you aren't using inheritance. You are effectively creating a custom inheritance tree through willy-nilly composition.
It is gratuitous to make this chain so deep, when the underlying code is just a handful of lines.
The doc-strings are unnecessary. It's self-evident what most of the code does if you read it.
// Returns an enumerator for this list with the given
// permission for removal of elements. If modifications made to the list
// while an enumeration is in progress, the MoveNext and
// GetObject methods of the enumerator will throw an exception.
//
public Enumerator GetEnumerator() {
return new Enumerator(this);
}
// Returns the index of the last occurrence of a given value in a range of
// this list. The list is searched backwards, starting at the end
// and ending at the first element in the list. The elements of the list
// are compared to the given value using the Object.Equals method.
//
// This method uses the Array.LastIndexOf method to perform the
// search.
//
public int LastIndexOf(T item)
{
Contract.Ensures(Contract.Result<int>() >= -1);
Contract.Ensures(Contract.Result<int>() < Count);
if (_size == 0) { // Special case for empty list
return -1;
}
else {
return LastIndexOf(item, _size - 1, _size);
}
}
// Returns the index of the first occurrence of a given value in a range of
// this list. The list is searched forwards, starting at index
// index and upto count number of elements. The
// elements of the list are compared to the given value using the
// Object.Equals method.
//
// This method uses the Array.IndexOf method to perform the
// search.
//
public int IndexOf(T item, int index, int count) {
if (index > _size)
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.index, ExceptionResource.ArgumentOutOfRange_Index);
if (count <0 || index > _size - count) ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.count, ExceptionResource.ArgumentOutOfRange_Count);
Contract.Ensures(Contract.Result<int>() >= -1);
Contract.Ensures(Contract.Result<int>() < Count);
Contract.EndContractBlock();
return Array.IndexOf(_items, item, index, count);
}
If you remove these 300 lines of pointless comments, you still have 900 lines of code that is terribly space-inefficient. Everything is "pretty," but slow to read, because of the immense amount of whitespace, nesting, and lines longer than 76 chars. You cannot read long swathes of code in one screenful. You have to scroll vertically and horizontally, because for some reason a standard library needs to throw exceptions (exceptions aren't free; they negatively and noticeably impact performance).
Seriously, you could just use an "out" errno/status. "But then we would have to always check to see if the operation succeeded!": exceptions make people lazy. Just because an exception wasn't thrown, doesn't mean you're doing things correctly.
Why does a List implement a search algorithm? Why binary search of all things -- because it's convenient? You know if I need a binary search, I can write one myself. Don't pollute my namespace.
// Searches a section of the list for a given element using a binary search
// algorithm. Elements of the list are compared to the search value using
// the given IComparer interface. If comparer is null, elements of
// the list are compared to the search value using the IComparable
// interface, which in that case must be implemented by all elements of the
// list and the given search value. This method assumes that the given
// section of the list is already sorted; if this is not the case, the
// result will be incorrect.
//
// The method returns the index of the given value in the list. If the
// list does not contain the given value, the method returns a negative
// integer. The bitwise complement operator (~) can be applied to a
// negative result to produce the index of the first element (if any) that
// is larger than the given search value. This is also the index at which
// the search value should be inserted into the list in order for the list
// to remain sorted.
//
// The method uses the Array.BinarySearch method to perform the
// search.
//
public int BinarySearch(int index, int count, T item, IComparer<T> comparer) {
if (index < 0)
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.index, ExceptionResource.ArgumentOutOfRange_NeedNonNegNum);
if (count < 0)
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.count, ExceptionResource.ArgumentOutOfRange_NeedNonNegNum);
if (_size - index < count)
ThrowHelper.ThrowArgumentException(ExceptionResource.Argument_InvalidOffLen);
Contract.Ensures(Contract.Result<int>() <= index + count);
Contract.EndContractBlock();
return Array.BinarySearch<T>(_items, index, count, item, comparer);
}
What if my list -- as is almost always the case -- is unsorted? The result will be incorrect? Looking through the chain of indirection, I cannot see any code checking to see that the list is sorted. Maybe it's there, but it's so much overhead trying to make sense of the List.BinarySearch -> Array.BinarySearch -> ArraySortHelper<T>.Default.BinarySearch -> arraysorthelper.BinarySearch -> arraysorthelper.InternalBinarySearch chain. So I'm going to silently get a wrong result, and the only way to know is to read the docstrings? Thanks.
As far as I can tell, it's unoptimized. It's just plain, OO C# meant to be readable. I don't see any tricks or tweaks to get the IL to be more concise/performant. Maybe the compiler is aggressively optimized for the core lib (but I'm not holding my breath -- because I can't see it).
I stopped using C# & F# almost a decade ago, but there are some relevant pieces of information that answer your questions:
1. Optimization is primarily handled by the .Net JIT, not the C# compiler. That allows F#, C#, VB.Net and other runtime languages to share similar performance characteristics without duplicating effort.
2. Docstrings are used by the IDE to help the user. That avoids the need to read the source code itself for regular usage.
3. When comparing the .Net List<> implementation against any C++ std::vector implementation, the former looks quite tame in comparison...
I empathize with your characterization of "a tree of object/classes" and I yearn for an example of how else to model a complex, domain-specific system not using the aforementioned tree.
Not the author of the comment, but based on how I understand the comment, I feel essentially the same way.
I would characterize it a bit differently, seeing as, for example (and to your point), a purely functional lisp program is a tree of lambdas and macros. The same could be said of Haskell.
For me the issue is that classes and objects are actually pretty complicated things for what they are. It’s easy to not notice when you’re in the habit of using them, but really pause and think about how complicated they are. They have both structure and machinery that probably aren’t required for most abstractions: regardless, in OOP they get shoehorned into every problem.
This is why OOP ends up with a bunch of well known design patterns, whereas in FP they’re not reaaaaally a thing (arguably).
A tree of functions is probably the simplest possible way to build programs, at a fundamental level: I am not speaking in terms of individual preferences here, but really mathematical simplicity.
well you see! What we can do is to namespace our functions, e.g. by naming them component_create, component_add_button, etc. We then create a plain dictionary with key value pairs that gets passed onto these functions! The functions then possibly return a new map, which is a modified map! This allows us to write code like
dog = dog_create({name: "foo", age: 12})
dog = dog_add_friend(dog, dog2)
print(dog["friends"])
I'm not seeing that in the example, and I'm not even seeing anything very relevant to FP in the example either. I guess there isn't much mutation happening, and functions are called? But that's not what FP is.
This tells me that you never really looked at functional languages, not even used them.
The power of ADT, especially when using a comprehensive pattern matching expression, is pretty difficult to emulate in the OOP world without a ton of code.
But in this extremely simple case you just need a record.
type Dog = { Name: string; Age: int; Friends: Dog list }
let dogBar = { Name = "bar"; Age = 11; Friends = [] }
let dogFoo = { Name = "foo"; Age = 12; Friends = [dogBar] }
printfn "%A" dogFoo.Friends
The advantage is that it's immutable and it's guaranteed not to have null in any fields.
C# only introduced records recently, while F# was born with them.
And C# still hasn't got ADTs because it's missing union types, as far as I remember.
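And a small sketch of the ADT + exhaustive matching point, with a made-up PaymentMethod union of my own:

type PaymentMethod =
    | Cash
    | Card of last4: string
    | Invoice of dueDays: int

// the compiler warns if any case of the union is left unhandled
let describe payment =
    match payment with
    | Cash -> "paid in cash"
    | Card last4 -> sprintf "paid by card ending in %s" last4
    | Invoice dueDays -> sprintf "invoice due in %d days" dueDays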
It's not a tree though.
A tree doesn't have connected leaves and branches. This is, however, common with classes that might get injected with the same dependency.
Sounds like missing the tree for the forest. I'm not from a pure CS background (so forgive my mangling of terms), but isn't a tree essentially an acyclic graph with constraints - 1 parent, 2 children, for example? What you're describing is adding some cycles into that graph, no?
The number of children can be anything, it's two children for a binary tree.
Each node except one must have exactly one parent, which isn't true if two or more nodes share one or more children.
And, yes, in theory this adds cycles which aren't allowed.
However, since class dependencies are better represented as directed connections (which aren't usually used for trees in CS terms), it isn't a true cycle.
The relational model, like we always did and do every day (in the db realm).
I am not saying we should never use trees; I am mainly saying that when the model is a very deep tree (or several deep trees and trees everywhere), it becomes overly complex.
Data models should be as flat as possible, and only nested when absolutely necessary.
Yes, and my yearning was for examples in which the domain objects are complex systems or machines themselves.
To your point, if the domain is a payment system, I can keep separate db's of Customer Info, Customer Purchases, Transaction Instances, Customer payment methods, etc. This seems like a domain suitable for functional code.
If the domain is a two stage orbital rocket, in which we must have a stateful system that has internal feedback loops (fuel consumption, vehicle trim, time of flight, time before stage separation, engine sensor data), our best software design is an object graph which causes spaghetti code ( does the navigation system belong to the electrical system, or the radio system? Wait, does the radio system belong to the electrical system? Wait, does the entire electrical system belong to the solid fuel system, since the electrical system is dependent on the generators partially, but what about the battery system? What critical components stay on the battery system if the generators are shut down?). I guess my point is, real life is a spaghetti relationship.
Consider the recent ispace probe crash. The article says "software bug" but in reality it's more of a design flaw, and I would bet it's exactly because of the topic of this thread. The sensors were reading correct data, but the validation of the intercommunication data between sensors was designed wrong.
Documents are pretty much everywhere. In many cases they are mutable because user needs to edit them, and on the web JavaScript code needs to dynamically modify them.
According to debugging tools in my web browser, your <div class=comment> is at level #15 under the <body> element. I wonder how would you model the in-memory representation of this web page, while keeping the model practical?
The big difference between C# and F# styles (yes you can do either style in both languages, but with varying degrees of friction) is if that tree is mutable or immutable.
F# (and OCaml, on which it was modelled) are OOP languages. If you don't like OOP there are functional programming languages that might be a better fit for you.
> F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.
That's right. I mostly switched to writing "dumb records + service classes" code in C#, and while F# is terser, there's just not enough pain to cause me to switch. When DU's come to C#, the gap will get even narrower.
I used to write lots of C#, but now I consider it a bad ecosystem. The problem is the amount of ceremony, silly OOP abstractions, dependency injection, etc. Just look at building a simple HTTP endpoint in C# compared to Node or even F#!
Yeah, you can get away with minimal C# stuff if you want. Mind you, minimal APIs are a relatively new thing as of .NET 6, so GP might not have had the chance to touch these new things yet.
Much of the older overdesigned pain of C# is that it used to be tied to how IIS wanted things to work. .NET Core initially pivoted more to DI stuff before pivoting again to these minimal APIs, but the DI stuff is definitely still available (and used).
For pragmatic choices it's all there: you can start off with minimal APIs and get far with them, and once you start feeling pain points as you want to re-implement things, it might be time to add in the more "frameworky" parts.
You can also use DI without having to use interfaces for everything. It's easy to inject dependencies as concrete classes straight into constructors, without all the abstraction (beyond letting the container construct your dependencies and inject them into your controllers). I'm a big fan of using services with an unabstracted database context from EF Core, rather than ever making use of repositories. I can still go from an MVC project with concrete views over to a Web API with a completely JavaScript frontend framework, without needing to change any of my business logic. Approximately 10 years ago, I was lost in all the abstraction you'd find in tutorials and any book on ASP.NET MVC, but experience has taught me a lot, as has working with other languages and seeing the (lack of) ceremony needed to get things done.
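Roughly what that wiring looks like, as a sketch with invented names (OrderService, AppDbContext, Order), assuming a .NET 6+ web project with EF Core and the Sqlite provider installed:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);

    // Concrete registrations: no IOrderService or IOrderRepository abstractions.
    builder.Services.AddDbContext<AppDbContext>(o => o.UseSqlite("Data Source=app.db"));
    builder.Services.AddScoped<OrderService>();
    builder.Services.AddControllers();

    var app = builder.Build();
    app.MapControllers();
    app.Run();

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
        public DbSet<Order> Orders => Set<Order>();
    }

    public class Order
    {
        public int Id { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    // Business logic talks to the EF Core context directly; no repository layer.
    public class OrderService
    {
        private readonly AppDbContext _db;
        public OrderService(AppDbContext db) => _db = db;

        public Task<List<Order>> GetRecentAsync() =>
            _db.Orders.OrderByDescending(o => o.CreatedAt).Take(10).ToListAsync();
    }

    [ApiController, Route("api/orders")]
    public class OrdersController : ControllerBase
    {
        private readonly OrderService _orders;   // concrete class injected directly
        public OrdersController(OrderService orders) => _orders = orders;

        [HttpGet]
        public async Task<ActionResult<List<Order>>> Get() => await _orders.GetRecentAsync();
    }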
By "used to write", I'm guessing maybe they worked in the .NET Framework/IIS era, which did have a cognitive cliff to climb. You could get used to the ceremony though and then your brain started to ignore it and focus on the stuff that mattered. These days it's much easier though.
>> The problem is the amount of ceremony, silly OOP abstractions, dependency injection, etc.
Your code snippet certainly has a lot of unnecessary ceremony. Why use a builder object at all? Why use a static class with a function to build the builder object?
There is a lot that gets done behind the scenes in createBuilder(). I understand where you're coming from, but this allows you to override any defaults that you don't like and provide your own. I personally still stick to the standard MVC pattern and don't go crazy with abstractions. I place my business logic within services and inject those into my controllers, but if you were to run a debugger, you would not have to jump through interfaces and other useless abstractions that were a thing of the past (and of the present, if you follow current tutorials and books).

I have used Node.js, and still use it to provide my frontend developers with an environment using Express to build out templates, with Gulp for minification/transpilation/compression, for use in Umbraco (a .NET Core CMS). My frontend developers don't need to know C# and can work in standard EJS templates and HTML, but benefit from SCSS and modern JavaScript. I can then build out the Razor syntax for views and just drop their CSS and JS files directly into the CMS projects.
Your "career calculus" article has been top of mind for me recently as I've talked about it a bunch of people. Amusing how those core concepts don't change much.
Also, you correctly anticipated that Swift would become mainstream long before F#, which happened. Of course hindsight is 20/20, but this wasn't that obvious back in 2015. Your reasoning was sound.
> F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.
C# will always be more popular because it is easier to learn. Why? Because it looks familiar to most developers. Why would you learn this unfamiliar thing called F# if C# is right there and you basically already know it? On top of that, C# almost has feature parity with F#.
However, F# is a simpler language than C#. That is a fact. It has fewer concepts that you need to learn. I've found that onboarding someone into an F# codebase takes a lot less time compared to onboarding someone into a TypeScript, C#, ... codebase. A lot less time. I've found that new people can start contributing after a single introduction. The things they build often just work.
I think that an F# code base costs a lot less money to maintain over longer periods of time. Can't prove it but I think that the difference is huge.
It would be interesting to see actual stats on F# usage, which I doubt are readily available. Given the reaction to this post, on what is an old article, there's still probably an underground interest in the language and some use in general. People seem to have built strong views on it either way, especially with some posters admitting they use it professionally in closed-source cultures (finance, insurance, etc.). Most metrics would not be accurate given the interoperability with C#; e.g. when Googling I would typically look up C# code and port it.
Maybe it doesn't need to thrive for everyone; maybe it just needs to continue being useful for the people who employ it and add value. That's probably OK. They could just be busy building stuff instead of blogging, especially if the community is mostly composed of senior developers (10+ years).
I think another problem was that Microsoft downplayed F#. They didn't support it in SSIS packages, or fully support it in MVC projects. Those were the main things I did back then. I really wanted to use F# and had the freedom to do so, but had to conclude I'd be faster sticking with C# than switching back and forth.
Why? Because LINQ is basically just syntactic sugar for regular IEnumerable methods, while discriminated unions have no equivalent at all.
Even if you wanted to claim that those IEnumerable methods ARE LINQ, it would still be possible to implement them as a library, while discriminated unions have to be a compiler feature.
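That equivalence is easy to see: the query form below compiles into exactly the extension-method calls next to it.

    using System;
    using System.Linq;

    class LinqSugarDemo
    {
        static void Main()
        {
            int[] numbers = { 1, 2, 3, 4, 5 };

            // Query syntax...
            var querySyntax =
                from n in numbers
                where n % 2 == 0
                select n * 10;

            // ...is just sugar for these IEnumerable extension-method calls.
            var methodSyntax = numbers
                .Where(n => n % 2 == 0)
                .Select(n => n * 10);

            Console.WriteLine(string.Join(", ", querySyntax));   // 20, 40
            Console.WriteLine(string.Join(", ", methodSyntax));  // 20, 40
        }
    }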
The C# compiler "duck types" LINQ so you can already (ab)use LINQ for general computation in C#. You can use nearly any Monad you want with LINQ syntax. It isn't always a strong fit for some types of Monads, but it is more capable than it seems. You might get some funny looks if you do, though.
(It's similar with async/await: it is "duck typed" at compile time, so you can write other monads against it if they make more sense in that form than in LINQ, or support both LINQ and async/await together.)
There's definitely some more interesting power in F#'s Computation Expressions that can't easily be done even with (ab)using the tools that already exist like that, but it is still interesting what can be done with the existing tools.
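As a concrete sketch of that duck typing: query syntax only needs methods named Select/SelectMany to resolve, so a homemade Option type (the name and shape here are purely illustrative) can be used with from/select without ever implementing IEnumerable.

    using System;

    // A homemade Option type; name and shape are purely illustrative.
    public readonly struct Option<T>
    {
        public bool HasValue { get; }
        public T Value { get; }
        private Option(T value) { HasValue = true; Value = value; }
        public static Option<T> Some(T value) => new(value);
        public static Option<T> None => default;
    }

    public static class OptionLinq
    {
        // The compiler only looks for Select/SelectMany with compatible shapes;
        // no interface is involved.
        public static Option<R> Select<T, R>(this Option<T> o, Func<T, R> f) =>
            o.HasValue ? Option<R>.Some(f(o.Value)) : Option<R>.None;

        public static Option<R> SelectMany<T, U, R>(
            this Option<T> o, Func<T, Option<U>> bind, Func<T, U, R> project)
        {
            if (!o.HasValue) return Option<R>.None;
            var u = bind(o.Value);
            return u.HasValue ? Option<R>.Some(project(o.Value, u.Value)) : Option<R>.None;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            // Query syntax over Option, not IEnumerable:
            var result =
                from a in Option<int>.Some(2)
                from b in Option<int>.Some(3)
                select a + b;                                  // Some(5)

            Console.WriteLine(result.HasValue ? $"Some({result.Value})" : "None");
        }
    }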
I really wanted to learn it, but I wanted to learn F#, not C#. The problem is... you can't really learn F# without knowing .NET and how it does all the OO stuff. Even the most basic things that take one easily googleable line in Python would return no results for F#. You just have to figure it out in C# and then apply it to F#.
Most of the buzz about .NET Native AOT is focused on things like startup time for compiled executables in cloud environments. For good reason.
But Native AOT also supports compilation to libraries with a C ABI, including both shared libs and static. My blog series tends to lean in that direction, talking about interoperability.
Some of the posts talk about very fundamental things. Some of the later posts mention a (somewhat experimental) binding generator I've been working on, which uses CLR metadata to automatically create glue and projections for other languages like Rust and TypeScript.
In general, interop between C# and other languages has been possible for a long time, but Native AOT allows it to be done without hosting a CLR, and as the feature matures, I think that'll make it more interesting for some use cases.
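As a tiny sketch of what the exported side can look like (assuming a project published with PublishAot=true and NativeLib set to Shared or Static; the entry point name here is made up):

    using System.Runtime.InteropServices;

    public static class NativeExports
    {
        // With Native AOT this becomes a plain exported symbol, callable from
        // C, Rust, etc., without hosting a CLR in the consuming process.
        [UnmanagedCallersOnly(EntryPoint = "add_numbers")]
        public static int AddNumbers(int a, int b) => a + b;
    }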
I like the warnings about style and naming conventions. I kinda wish there were more of them. These warnings can help teams avoid arguments about things that don't really matter very much.