rowls66's comments | Hacker News

My initial reaction was 'What is a CMS?'. Naming your company with an initialism and never saying anywhere in the product description what the initials mean is not welcoming. Now I know that anyone who does not know that a CMS is a 'Content Management System' is probably not a likely customer, but you never know, and expanding the initials somewhere should be possible.


You don't have to tip an Uber or Lyft driver either.


Preach.


I think more effort should have been made to live with 65,536 characters. My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis. I think that adding emojis to Unicode is going to be seen as a big mistake. We already have enough network bandwidth to just send raster graphics for images in most cases. Cluttering the Unicode codespace with emojis is pointless.


You are mistaken. Chinese Hanzi and the languages that derive from or incorporate them require way more than 65,536 code points. In particular, a lot of these characters are formal family or place names. UCS-2 failed because it couldn't represent these, and people using these languages justifiably objected to having to change how their family name is written to suit computers, vs. computers handling it properly.

This "two bytes should be enough" mistake was one of the biggest blind spots in Unicode's original design, and is cited as an example of how standards groups can have cultural blind spots.


UTF-16 also had a bunch of unfortunate ramifications on the overall design of Unicode, e.g. requiring a substantial chunk of BMP to be reserved for surrogate characters and forcing Unicode codepoints to be limited to U+10FFFF.
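
A minimal sketch in Python of how that split works (the helper name is mine, not any standard API): codepoints above U+FFFF get 0x10000 subtracted, and the remaining 20 bits are spread across two reserved 16-bit units, so surrogate pairs can only address 2^20 supplementary codepoints, which is exactly where the U+10FFFF ceiling comes from.

    def to_surrogate_pair(cp: int) -> tuple[int, int]:
        assert 0x10000 <= cp <= 0x10FFFF
        cp -= 0x10000                    # 20 bits remain
        high = 0xD800 + (cp >> 10)       # top 10 bits: high surrogate
        low = 0xDC00 + (cp & 0x3FF)      # bottom 10 bits: low surrogate
        return high, low

    print([hex(u) for u in to_surrogate_pair(0x1F600)])
    # ['0xd83d', '0xde00']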


CJK unification (https://en.wikipedia.org/wiki/CJK_Unified_Ideographs) i.e. combining "almost same" Chinese/Japanese/Korean characters into the same codepoint, was done for this reason, and we are now living with the consequence that we need to load separate Traditional/Simplified Chinese, Japanese, and Korean fonts to render each language. Total PITA for apps that are multi-lingual.


This feels like it should be solvable by introducing a few more marker characters, like one code point representing "the following text is traditional Chinese", "the following text is Japanese", etc.? It would add even more statefulness to Unicode, but I feel like that ship has already sailed with the U+202D LEFT-TO-RIGHT OVERRIDE and U+202E RIGHT-TO-LEFT OVERRIDE characters...


Unicode used to have a system of in-band language tags, but it was deprecated https://www.unicode.org/faq//languagetagging.html
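
For the curious, the deprecated scheme looked roughly like this (a sketch based on the FAQ above, assuming the Plane 14 mapping it describes: ASCII 0x20-0x7E mirrored at U+E0020..U+E007E, prefixed by U+E0001 LANGUAGE TAG):

    def language_tag(lang: str) -> str:
        # U+E0001 prefix, then each ASCII char shifted into the tag block.
        return "\U000E0001" + "".join(chr(0xE0000 + ord(c)) for c in lang)

    tagged = language_tag("ja") + "\u8aaa"   # "the following text is Japanese"
    print([hex(ord(c)) for c in tagged])
    # ['0xe0001', '0xe006a', '0xe0061', '0x8aaa']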


There is a way to do it: https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_b...

However, it's not used widely and has problems with variant-naïve fonts.
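
The most familiar variation selectors are the emoji ones; the ideographic variant selectors (U+E0100 and up) work the same way but have far less font support. A quick illustration in Python:

    heart_text = "\u2764\ufe0e"    # VS15 (U+FE0E): text presentation
    heart_emoji = "\u2764\ufe0f"   # VS16 (U+FE0F): emoji presentation
    print(heart_text, heart_emoji)
    # Whether the two actually render differently depends on the font,
    # which is exactly the variant-naive-font problem mentioned above.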


Yeah. I would have favored something like introducing new codepoints with "automatic fallbacks" if the font doesn't support that codepoint, to ensure backward compatibility. There would be a one-time hardcoded mapping table introduced that font renderers would have to adopt.
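
Purely as a sketch of that idea (nothing like this exists in Unicode; the codepoints below are taken from a Private Use range and the names are invented for illustration):

    # Hypothetical hardcoded table: new disambiguated codepoint mapped to
    # the old unified codepoint, for fonts that predate the new one.
    FALLBACKS = {
        0xF0001: 0x8AAA,   # made-up "Japanese-form" codepoint -> unified U+8AAA
        0xF0002: 0x8AAA,   # made-up "HK-form" codepoint -> unified U+8AAA
    }

    def resolve(cp: int, font_has) -> int:
        return cp if font_has(cp) else FALLBACKS.get(cp, cp)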


> My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis

This week's Unicode 17 announcement [1] mentions that of the ~160k existing codepoints, over 100k are CJK codepoints, so I don't think this can be true...

[1] https://blog.unicode.org/2025/09/unicode-170-release-announc...


Your understanding is incorrect; a substantial number of the ranges allocated outside BMP (i.e. above U+FFFF) are used for CJK ideographs which are uncommon, but still in use, particularly in names and/or historical texts.


The silly thing is, lots of emoji these days aren't even a single code point. Many are two or more code points combined with a zero width joiner. Surely we could've introduced one code point which says "the next code point represents an emoji from a separate emoji set"?
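
For instance, the family emoji is a standard ZWJ sequence:

    family = "\U0001F468\u200D\U0001F469\u200D\U0001F467"
    print(family)       # renders as one glyph on ZWJ-aware systems
    print(len(family))  # 5: man, ZWJ, woman, ZWJ, girl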


With that approach you could no longer look at a single code point and decide if it's e.g. a space. You would always have to look back at the previous code point to see if you are now in the emoji set. That would bring its own set of issues for tools like grep.

But what if instead of emojis we take the CJK set and make it more compositional? Instead of >100k characters with different glyphs we could have defined a number of brush stroke characters and compositional characters (like "three of the previous character in a triangle formation"). We could still make distinct code points for the most common couple thousand characters, just like ä can be encoded as one code point or two (umlaut dots plus a).

Alas, in the 90s this would have been seen as too much complexity
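
The ä case is easy to check with Python's unicodedata, for what it's worth:

    import unicodedata

    one = "\u00e4"    # precomposed ä
    two = "a\u0308"   # a + COMBINING DIAERESIS
    print(one == two)                                # False
    print(unicodedata.normalize("NFC", two) == one)  # True
    print(unicodedata.normalize("NFD", one) == two)  # True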


Seeing your handle I am at risk of explaining something you may already know, but, this exists! And it was standardized in 1993, though I don't know when Unicode picked it up.

Ideographic Description Characters: https://www.unicode.org/charts/PDF/U2FF0.pdf

The fine people over at Wenlin actually have a renderer that generates characters based on this sort of programmatic definition, their Character Description Language: https://guide.wenlininstitute.org/wenlin4.3/Character_Descri... ... in many cases, they are the first digital renderer for new characters that don't yet have font support.
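
A tiny example of such a sequence, for anyone following along (descriptions are structural metadata, not rendering instructions):

    # U+2FF0 composes left-to-right: 女 beside 子 describes 好 (U+597D).
    ids = "\u2ff0\u5973\u5b50"
    print(ids)  # ⿰女子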

Another interesting bit, the Cantonese linguist community I regularly interface with generally doesn't mind unification. It's treated the same as a "single-storey a" (the one you write by hand) and a "two-storey a" (the one in this font). Sinitic languages fractured into families in part because the graphemes don't explicitly encode the phonetics + physical distance, and the graphemes themselves fractured because somebody's uncle had terrible handwriting.

I'm in Hong Kong, so we use 説 (8AAC, normalized to 8AAA) while Taiwan would use 說 (8AAA). This is a case my linguist friends consider a mistake, but it happened early enough that it was only retroactively normalized. Same word, same meaning, grapheme distinct by regional divergence. (I think we actually have three codepoints that normalize to 8AAA because of radical variations.)

The argument basically reduces to "should we encode distinct graphemes, or distinct meanings." Unicode has never been fully consistent on either side of that. The latest example: we're getting ready to do Seal Script as a separate, non-unified code point. https://www.unicode.org/roadmaps/tip/

In Hong Kong, some old government files just don't work unless you have the font that has the specific author's Private Use Area mapping (or happen to know the source encoding and can re-encode it). I've regularly had to pull up old Windows in a VM to grab data about old code pages.

In short: it's a beautiful mess.


I entirely agree that we could've taken better care of the leading 16-bit space. But protocol-wise, adding a second component (images) to the concept of textual strings would've been a terrible choice.

The grand crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification, where we already had a whopping 1.1 million code points at our disposal.


> The grand crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification

I'm not sure what you mean by this. The UTF-8 specification was written long before emoji were included in Unicode, and generally has no bearing on what characters it's used to encode.
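
For the record, UTF-8 as originally defined could encode up to U+7FFFFFFF in six bytes; RFC 3629 later restricted it to U+10FFFF to match UTF-16. Emoji encode like any other supplementary-plane character, in four bytes:

    print("\U0001F600".encode("utf-8").hex(" "))  # f0 9f 98 80

So the encoding itself never constrained where emoji could go.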


This article is from 2013, over a decade old. According to the Center on Budget and Policy Priorities, the number of SSDI beneficiaries has fallen from its peak in 2014. So this article was written about a trend that peaked a year after its publication and has reversed in the decade since. Odd that it would be reposted today.

https://www.cbpp.org/research/social-security/social-securit...


I think someone may have wanted to bring more attention to this because, contrary to surface-level opinion and the apparent peak, there is actually a growing number of people who meet the requirements to be eligible for disability but who do not receive those benefits.

There's a longstanding fear, and one that seems well founded, that those who meet the requirements won't actually file, because doing so will have an adverse effect on any potential future employment.

Disability discrimination during hiring is almost impossible to prove with AI these days.

I've heard horror stories from some people at the shelter where I volunteer. One guy apparently had a county representative check the wrong boxes on benefit documents by mistake during welfare processing, saying he was disabled because of a health condition (sleep apnea, IIRC) when he wasn't disabled or receiving disability, and he couldn't even find retail work afterwards.

AI has opened the floodgates to all sorts of discrimination as a result of the weights and decision-making being a black box with no accountability.


Why would anyone buy their game engine when it is available for free? Seems like a solution for a problem that doesn't/won't exist.


This was my question too. I suspect they're worried about someone making a paid version with extra features which aren't contributed back to the community.

For a moment, I was thinking Defold ought to dual-license their engine under both the current non-OSS modified Apache and the GPL. That way, you'd have the option to either:

1. Commercialise software created using modified versions of Defold, without releasing the source of your modified version, as long as you don't commercialise your modified version of Defold itself.

2. Commercialise a modified version of Defold, but you must make the source available under the GPL. (Which would mean that source could be used by the upstream project as well.)

But while typing this up, I noticed the flaw in this plan—the parenthetical isn't true! Because the upstream project would be dual licensed, they couldn't use GPL licensed code.


>dual-license the engine under both their current non-OSS modified Apache and the GPL.

Of course if they do, I hope they will say "either modified Apache or the GPL".

My company's lawyers made a big stink about us using jQuery plugins that said "and" instead of "or".


Why do you assume the problem won’t exist, when this exact thing happens all the time? Just to name a tiny handful of obvious examples: Oracle, Canonical, GitHub, RedHat, DataStax. Not only could someone add enhancements that justify the price, like several other comments have pointed out here, they could also simply offer support that Defold doesn’t offer, and they could do marketing that Defold doesn’t do. The number of paid products that are equivalent to and/or based on free products is innumerable.

There’s no reason to assume that a paid fork would reduce the number of free Defold users; it can happen, but depends on what is built and offered, and sometimes paid forks are good for the ecosystem and increase the number of overall users.


Paid extensions are allowed, which seems like a neat compromise.

If you need to add an extra API or something to the core to make your paid extension work, you can't charge for that, which I think is designed to incentivise "improve the extension API, contribute that back to the core project, then go wild on your commercial extension and see if you can get people to pay for it."

I have no clue whether this approach will turn out to work in the medium-to-long term, but it's a fascinating idea and seems at the very least like an experiment very much worth conducting.


If they didn't prevent selling derivative game engines, someone could fork it and add a valuable feature that was only available in the paid fork. This could split the community.


Yeah, just look at how fractured the Godot community is.

(it's not)


That’s because the user base isn’t big enough yet for Amazon to make a paid version. Or no one at Amazon has figured out a way to monetize a game engine yet.

If they do, look out.


Godot's user base is already bigger than Amazon Lumberyard's (now O3DE).

Unlike Defold, both Godot and O3DE are actually free.


You can’t really look at overall user base, you have to look at the number of users making enough money off of Godot that they’re willing to pay for commercial features.

That number is vastly smaller.

If Godot games were making anywhere near the revenue that Unity games do, I'm willing to bet there would be an Amazon fork.


It's you who brought user base size in.

(and it's still bigger for Godot)


And now I’m clarifying that I should have been more precise when I said user base. Amazon clearly doesn’t care about users that aren’t going to make them money.

It also doesn’t matter what the user base is today. It matters what it might be tomorrow. It’s not easy to change your license. If you wait until Amazon is coming for you, it’s too late to do anything to stop it.


It is a bit fractured - there is a Redot fork. But that was caused by governance issues, not anyone's desire to sell the engine.


The modified engine someone else is selling could have potentially important extra features.

For example, a company might try to sell a version of the engine which has been ported to a console the original engine doesn't support. Game porting companies are very common, and if it's their main business then they will usually have in-house libraries or modified engine versions which significantly simplify the porting process.

That's exactly what's happening with open source game engines like Godot. Their documentation lists almost a dozen companies providing porting services for Godot games. That isn't necessarily a bad thing, but it's up to the author of the game engine whether they want to allow others to profit from their work in such a way.

Seems like Defold's currently supported platforms cover most of the popular consoles, but that was probably not the case during early development when the license was chosen, and may not be again in a few years when the next generation of consoles comes out. Someone might also be selling better console support than what Defold provides out of the box. Besides the consoles, there is also stuff like integration with various PC stores like GOG, Epic, and others. It's not necessarily huge work, but plenty of smaller devs want to focus only on the gameplay aspects. So once a game is finished (and you are tired from the development process), buying anything which significantly reduces porting/integration effort can be an easy choice.

One more example of a major feature which can require tight engine integration and motivate buying a modified version of a "free engine" is multiplayer support. Good multiplayer support can be quite tricky, with some game genres being harder than others. There have been many attempts at providing magic multiplayer solutions which automatically synchronize all game entities under the hood without the developer thinking about it. Such an approach isn't necessarily going to give as good a playing experience as designing the game with multiplayer support in mind from day 0, carefully thinking about how the game state is organized and what, when, and how things are synchronized. But that requires planning ahead, technical expertise, and a suitable budget. Commercial multiplayer middleware for existing engines is also not uncommon.

Whether something like that is considered an addon or modified engine version depends on exact licensing terms and the exact implementation details how game engine and addon code is organized.

A slightly different example, a game engine built on top of a game engine, is RPG Maker. For a long time RPG Maker was its own game engine, but a few years ago its developers made a version of RPG Maker built on top of Unity. Plenty of other genre-specific engines (especially for fighting games) are built on top of general-purpose game engines. Again, the line between modified engine, addon, and game with a built-in editor is tricky.


RPG in a Box[0] is the first example that sprang to mind when I read these comments. It transforms Godot into a more generalized "game maker" but could arguably be considered selling the engine.

[0] https://rpginabox.com/

Edit: clarity


Action Game Maker (https://store.steampowered.com/app/2987180/ACTION_GAME_MAKER...) seems like it's the same thing: it's built on Godot and seems like it's trying to provide a no-code interface to it.


Can a systems programming language use garbage collection? I don't think so.


You'd be surprised.

In the 1980s, complete workstations were written in Lisp down to the lowest level code. With garbage collection of course. Operating system written in Lisp, application software written in Lisp, etc.

Symbolics Lisp Machine

https://www.chai.uni-hamburg.de/~moeller/symbolics-info/fami...

LMI Lambda http://images.computerhistory.org/revonline/images/500004885...

We're talking about commercial, production-quality, expensive machines. These machines had important software like 3D design software, CAD/CAM software, etc. And a very, very advanced OS. You could inspect (step into) a function, then into the standard library, and then you could keep stepping in until you ended up looking at the operating system code.

The OS code, being dynamically linked, could be changed at runtime.


Seems to me that if Congress would prefer that any disputes arising from the implementation of their laws be handled by the administrative agency charged with enforcing them, all they need to do is say so in the law. Not sure how the courts could get around that.


Difficulty: Congress almost never passes substantive laws anymore.

No way you’re getting a with-teeth EPA law, for example, through a modern Congress until we’re back to smog clouds over our cities and burning rivers and creating new, large cancer districts and superfund sites. And that’ll have to go on for a while before it happens. Then, maybe.

We’ve been coasting on good laws from the 70s and earlier, mostly, while weakening regulation has been eating our foundation like termites (especially Chicago-school-driven judicial rulings on how the executive is allowed to enforce anti-trust, in the late 70s, and later removal of media ownership consolidation rules). Now they (people who wish they could hurt people while making money and not be told to stop) can attack those good laws directly.


> Not sure how the courts could get around that.

Like any other law, they rule it as unconstitutional. Appeal its way up to the SCOTUS, and they will ultimately decide if they want to keep the power or give it back to the regulatory agencies.


If only Congress were capable of passing laws


I found 'A Philosophy of Software Design' by John Ousterhout to be useful. It contains a lot of solid, easy to understand advice with many examples.


Great book, I've learnt a lot from it


I think what's more harmful than unit tests are code coverage metrics for unit tests that devs feel compelled, or are required, to achieve. The easiest way to achieve code coverage goals is to write lots of small tests that test individual methods, but test very little of the interaction between them.

I feel that the goal of unit testing should be to test the largest unit possible without requiring external dependencies. In the language of domain driven design, this means to test the domain model. If you can get extensive coverage of the domain model as a whole, system tests can be used to test the complete system.

Alas, I have seen very few software systems with high quality domain models. It is not an easy thing to achieve.
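
To make the contrast concrete, a sketch in Python (the Order model is invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Order:
        vip: bool
        items: list[int] = field(default_factory=list)

        def total(self) -> int:
            subtotal = sum(self.items)
            return subtotal - subtotal // 10 if self.vip else subtotal

    # Coverage-gaming style: pins one method, proves little.
    def test_total_empty():
        assert Order(vip=False).total() == 0

    # Larger-unit style: drives the model through a scenario, still no I/O.
    def test_vip_discount_applies_across_items():
        order = Order(vip=True)
        order.items += [60, 40]
        assert order.total() == 90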


You need to spend $16,666 (at 3% back) just to cover the $500 annual fee. That's a lot for many people, especially when many low-end retailers don't accept AmEx. As already stated, rewards cards are a subsidy for the rich, or at least for people who spend a lot.


Consider the following...

If you conceptualize it as "annual fee minus reward credits" it's far more reasonable than that. For example, Amex Platinum is $695/year, but the Walmart+ and Digital Entertainment credits are $395 combined. Compared to the next highest card in Amex's line, Gold @ $250, this makes more sense than you might originally think.
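
Back-of-the-envelope, using the numbers above and the 3% rate implied by the parent's $16,666 figure:

    platinum_net = 695 - 395      # effective Platinum fee after credits
    print(platinum_net)           # 300, only $50 over Gold's $250
    print(round(500 / 0.03))      # 16667: the spend behind the $16,666 figure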


Often the $500/year cards have near immediate rewards that cover the cost - if you use those rewards. For example, the CapOne X gives a $400 travel credit every year.


CSR is also $550 with a $300 travel credit that is not difficult to immediately redeem.


Yeah $500/yr fee is ridiculous. I think my card is 1/4 of that and I’m sure I break even the first month since almost everything I pay for goes through that card. Utilities, cell phone bills, after school program fees (and before that daycare), all other recurring bills besides mortgage/car, restaurants, groceries, anything else I want or need to buy go through the same card. Always pay the balance in full each month no matter how painful.

