crq-yml's comments

I'm pretty sure it's not exactly about the code; it's a case of having honed skills and techniques from multiple different sources. John Romero had bounced around the industry, working on both larger and smaller productions, multiplatform ports, and different approaches to engine/content (he got his hands on both Origin's and Infocom's stuff, as well as a few other places) - the number of references he brought to the table should not be underestimated. John Carmack didn't have that same experience, but he would have been able to take a description from Romero of "at Origin we did it like this" and aim to make a very efficient version of it - his turn toward borrowing academic research for inspiration came a little later. And there was also the early influence of Tom Hall, who was older, able to communicate what he wanted as a producer, and probably steered the programming team away from wrong turns a few times.

When you have the experience, you already know how long it takes to implement the majority of the game - at least when we're speaking of these early 2D games built from bitmaps, tiles, small animations, and some monospace text. The gameplay code is game-jam sized in most instances, so the majority of the work was I/O code and asset pipelines. You can chart a safe course to get through one tiny project, then another, and another, and build up a best-of collection of the routines that worked. The coding style would be assembly-like at this time even if they were using C - no deep call stacks, mostly imperative "load and store" - which allows for a lower-level form of reuse than is typical these days: the larger algorithm gets broken down into "load", "mutate", "mutate", "mutate", "store", each as a separate routine. So you end up with some tight code once you've run it through a lot of projects. Softdisk provided the opportunity for building that and getting paid.
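A minimal sketch of what that flat style looks like in C, just to make the idea concrete - the names, tile size, and routines are all invented for illustration, not taken from any actual id or Softdisk source:

    /* Flat "load/mutate/store" style: global buffers, shallow calls,
       one small step per routine. All names here are hypothetical. */
    #include <stdint.h>
    #include <string.h>

    #define SCREEN_W 320
    #define SCREEN_H 200

    static uint8_t screen[SCREEN_W * SCREEN_H];   /* back buffer          */
    static uint8_t tile_cache[16 * 16];           /* scratch for one tile */

    /* "load": copy one 16x16 tile out of a tileset into scratch */
    static void load_tile(const uint8_t *tileset, int index)
    {
        memcpy(tile_cache, tileset + index * 16 * 16, 16 * 16);
    }

    /* "mutate": one small transform per routine */
    static void flip_tile_horizontal(void)
    {
        for (int y = 0; y < 16; y++)
            for (int x = 0; x < 8; x++) {
                uint8_t t = tile_cache[y * 16 + x];
                tile_cache[y * 16 + x] = tile_cache[y * 16 + (15 - x)];
                tile_cache[y * 16 + (15 - x)] = t;
            }
    }

    /* "store": blit the scratch buffer into the back buffer */
    static void store_tile(int sx, int sy)
    {
        for (int y = 0; y < 16; y++)
            memcpy(&screen[(sy + y) * SCREEN_W + sx], &tile_cache[y * 16], 16);
    }

    /* The "program" is just a flat sequence of those steps - no deep
       call stack, no abstraction layer between the data and the work. */
    void draw_flipped_tile(const uint8_t *tileset, int index, int sx, int sy)
    {
        load_tile(tileset, index);
        flip_tile_horizontal();
        store_tile(sx, sy);
    }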


I think the gap comes from the collective/individual divide found elsewhere in Japan/US comparisons. It's just a bit less obvious with regard to art.

In Japan there is a presumed collective endeavor to creativity. That starts in school and continues into the professional world: mangaka will plagiarize from each other in the pursuit of a collective storytelling language (a concept introduced to me by Even A Monkey Can Draw Manga, a great humorous short read on the simple realities of the industry, with practical advice). Someone who makes a bad drawing is given a lot of leeway to be "pulled back in line", for better or worse. The professionals complain that everyone copies from everyone else overly much, and the pressure at the top level to continuously put out high-level work is deadly intense, but it creates the high standard of uniformity.

But US culture guarantees a lot of awkward, standoffish scenarios because, if you make art, you are positioned relative to the worst framing of your ambition, which typically means you are viewed as a speculator - someone plotting a way to cash in without doing anything for others. It's far more acceptable to say that you are an art teacher than an artist, because that locates you within the structure of the firm and the state, which is the "hidden" collective tendency in US culture: be as individual as you want if it builds the nation in balance-sheet terms, otherwise you are a failure. Thus the observation from earlier in the thread that a sports fan is more deserving of respect than an amateur athlete - the fan is a consumer; they are participating in the market.


There's a cruel truth to electing to use any dependency for a game: any of it may turn out to be a placeholder for the final design. If the code that's there aligns with the design you have, maybe you speed along to shipping something, but all the large productions end up doing things custom somewhere, somehow, whether that's in the binary or through scripts.

But none of the code is necessary to do game design either, because the code just reflects the symbolic complexity you're trying to put into the game. You can run a simpler scene, with simpler graphics, and it will still be a game. That's why we have "10-line BASIC" game jams, and they produce interesting and playable results. Making it commercial quality is more tied to getting the necessary kinds and quantities of feedback to find the audience and meet their expectations, and sometimes that means using a big-boy engine to get a pile of oft-requested features, but I've also seen it be completely unimportant. It just depends a lot on what you're making.


I think the main reason not to go full-throttle into "vibes -> machine code" (to extrapolate past doing it in C) is that we have a history of building nested dolls of bounded error in our systems. We do that with the idea of file systems, process separation, call stacks, memory allocations, and virtual machines.

Now, it is true that vibes result in a larger quantity of lower-level code than we would stomach writing on our own. But that has consequences for the resulting maintenance challenge, since the system as a whole is less structured by its boundaries.

I think a reasonable approach when using the tools is to address problems "one level down" from where you'd ordinarily do it, and to allow yourself to use something older where there is plenty of historical source for the machine to sample from. So, if you currently use Python, maybe try generating some Object Pascal. If you use C++, maybe use plain C. If there were large Forth codebases I'd recommend targeting that, since it breaks past the C boundary into "you're the operator of the system, not just a developer", but that might be the language the approach stumbles over the most.


Solus. Same install for five years running, rolling release, no breakage.


You will still need the tool but the interface to it may start to change.

A lot of the editing functions for 3D art play some role in achieving verisimilitude in the result - that it looks and feels believably like some source reference, in terms of shapes, materials, lights, motion and so on. For the parts of that where what you really want to say is "just configure A to be more like B", prompting and generative approaches can add a lot of value. It will be a great boost to new CG users and allow one person to feel confident in taking on more steps in the pipeline. Every 3D package today resembles an astronaut control panel because there is too much to configure and the actual productions tend to divvy up the work into specialty roles where it can become someone's job to know the way to handle a particular step.

However, the actual underlying pipeline can't be shortcut: the consistency built by traditional CG algorithms is the source of the value within CG, and still needs human attention to be directed towards some purpose. So we end up in equilibriums where the budget for a production can still go towards crafting an expensive new look, but the work itself is more targeted - decorating the interior instead of architecting the whole house.


I believe Lisp is better understood than Forth these days, in that most of the "big ideas" that have been built in it have also been borrowed and turned into language features elsewhere. We have a lot of languages with garbage collection, dynamic types, an emphasis on a single container type, some kind of macro system, closures, self-hosting, etc. These things aren't presented with as much syntactic clarity outside of Lisp, but they also benefit from additional engineering that makes them "easy to hold and use".

Lisp appeals to a hierarchical approach, in essence. It constrains some of the principal stuff that "keeps the machine in mind" by automating it away, so that all that's left is your abstraction and how it's coupled to the rest of the stack. It's great for academic purposes since it can add a lot of features that isolate well. Everyone likes grabbing hierarchy as a way to scale their code to their problems, even though its proliferation is tied to current software crises. Hierarchical scaling provides an immediate benefit (automation everywhere) and a consequent downside (automation everywhere, defined and enforced by the voting preferences of the market).

Forth, on the other hand, is a heavily complected thing that doesn't convert into a bag of discrete "runtime features" - in the elementary bootstrapped Forth, every word collaborates with the others to build the system. The features it does have are implementation details elevated into something the user may exploit, so they aren't engineered to be "first class", polished, or easy to debug. It remains concerned with the machine, and its ability to support hierarchy is less smoothly paved, since you can modify the runtime at such a deep level. That makes it look flawed or irrelevant (from a Lisp-ish perspective).

But that doesn't mean it can't scale, exactly. It means that the typical abstraction it enables is to build additional machines that handle larger chunks of your problem, while the overall program structure remains flat and "aware" of each machine you're building, where its memory is located, its runtime performance envelope, and so on. It doesn't provide the bulldozers that let you relocate everything in memory, build a deep call stack, call into third-party modules, and so on. You can build those, but you have to decide that they're actually necessary instead of grabbing them in anger because the runtime already does it. This makes it a good language for "purposeful machines", where everything is really tightly specified. It has appealing aspects for real-time code, artistic integrity, verification, and long-term operation. Those are things that the market largely doesn't care about, but there is a hint of the complected nature of Forth in every system that aims for them.


Bloat mostly reflects Conway's law, the outcome being that you're building towards the people you're talking to.

If you build towards everyone, you end up with a large standard like Unicode or IEEE 754. You don't need everything those standards have for your own messages or computations - sometimes, in fact, they run counter to your goal and end up wasting transistors - but they are convenient enough to be useful defaults, and convenient enough for storing data that will be reused for something else later, and therefore they are ubiquitous in modern computing machines.

And when you have the specific computation in mind - an application like plotting pixels or ballistic trajectories - you can optimize the heck out of it and use exactly the format and features needed and get tight code and tight hardware.
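To make the "exact format" point concrete, here's a hypothetical sketch in C of a ballistic-trajectory step done in 16.16 fixed point rather than general-purpose IEEE 754 doubles - the format choice and the constants are invented for the example, not taken from any particular codebase:

    /* 16.16 fixed point: the whole "float feature set" collapses into
       integer adds and shifts, which is all this computation needs. */
    #include <stdint.h>

    typedef int32_t fix16;                 /* 16.16 fixed point            */
    #define FIX_ONE ((fix16)1 << 16)
    #define GRAVITY (-(FIX_ONE))           /* e.g. -1.0 units per tick^2   */

    typedef struct { fix16 x, y, vx, vy; } projectile;

    /* Advance one tick: no rounding modes, NaNs, or float hardware involved */
    void step(projectile *p)
    {
        p->vy += GRAVITY;
        p->x  += p->vx;
        p->y  += p->vy;
    }

    /* Convert back to whole pixels when it's time to plot */
    int to_pixel(fix16 v) { return (int)(v >> 16); }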

But when you're in the "muddled middle" of trying to model information - maybe it uses some standard stuff, but your system is doing something else with it, and the business requirements are changing, and the standards are changing too, and you want it to scale - then you end up with bloat. Trying to be flexible and break the system up into modular bits doesn't really stave this off so much as it creates a Whack-a-Mole of displaced complexity. Trying to use the latest tools and languages and frameworks doesn't solve it either, except where they drag you into a standard that can successfully accommodate the problem. Many languages find their industry adoption case when a "really good library" comes out for them, and that's a kind of informal standardizing.

When you have a bloat problem, try to make a gigantic table of possibilities and accept that it's gonna take a while to fill it in. Sometimes along the way you can discover what you don't need and make it smaller, but it's a code/docs maturity thing. You don't know without the experience.


One of the things I remember about myself and others as young people emerging in the years around Y2K was that we were taught presumption at every opportunity. Pat answers from the elite circles were to be found for everything, and the referential aspects of pop culture were built on that; they could critique it, make satire, but they couldn't imagine a world without it, and therefore the conversation had a gravity of the inevitable and inescapable. Piece by piece, that has been torn down in tandem with the monoculture. A lot of it has subsequently been called out as toxic, or as an -ism, or as otherwise diminishing.

Every influencer now has this dance they do with intellectual statements where, unless they intentionally aim to create rhetorical bait, they don't make bold context-free claims. They hedge and address all sorts of preliminaries.

At the same time, the entry points to culture have shifted. There's a very sharp divide now, for example, between the online posting of fine art, decorative art, and commercial art, and "the online art community" - influencer-first artists posting primarily digital character illustrations on social media. The first three are the legacy forms (and the decorative arts are probably the least impacted by any of this), but the last invokes a younger voice that is oblivious to history - they publish now and learn later, so their artistic conversation tends to be more immature, but it comes with a sense of identity that generally mimics the influencer space. Are they making art or content? That's the part that seems to be the foundational struggle.


I believe it's more nuanced than that. Artists, like programmers, aren't uniformly trained or skilled. An enterprise CRUD developer asks different questions and proposes different answers compared to an embedded systems dev or a compiler engineer.

Visual art is millennia older and has found many more niches, so - besides there being a very clear history and sensibility for what is actually fundamental vs industry smoke and mirrors - for every artist you encounter, the likelihood that their goals and interests happen to coincide with "improve the experience of this software" is proportionately lower than in development roles. Calling it drudgery isn't accurate, because artists do get the bug for solving repetitive drawing problems and sinking hours into rendering out little details, but the basic motive for that is also likely to be "draw my OCs kissing", with no context of collaboration with anyone else or of building a particular career path. The intersection between personal motives and commerce filters a lot of people out of the art pool, and the particular motives of software filter them a second time. The artists with leftover free time may use it for personal indulgences.

Conversely, it's implicit that if you're employed as a developer, there is someone else you are talking to who depends on your code and its precise operation, and the job itself is collaborative, with many hands potentially touching the same code and every aspect of it discussed to death. You want to solve a certain issue that hasn't yet been tackled, so you write the first attempt. Then someone else comes along and tries to improve on it. Because of that, the shape of the work and how you approach it remains similar across many kinds of roles, even as the technical details shift. As a result, you end up with a healthy amount of free-time software made to a professional standard simply because someone wanted a thing solved, so they picked up a hammer.

