
In recent years, simplistic languages such as Python and Go have “made the case” that complexity is bad, period. But when humans communicate expertly in English (Shakespeare, JK Rowling, etc) they use its vast wealth of nuance, shading and subtlety to create a better product. Sure you have to learn all the corners to have full command of the language, to wield all that expressive power (and newcomers to English are limited to the shallow end of the pool). But writing and reading are asymmetrical and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend. We need to match the impedance of the tool to the problem. [I paraphrase Larry Wall, inventor of the gloriously expressive https://raku.org]

Not sure how I feel about Shakespeare and JK Rowling living in the same parenthesis!

Computer languages are the opposite of natural languages - they are for formalising and limiting thought, the exact opposite of literature. These two things are not comparable.

If natural language were so good for programs, we’d be using it - many, many people have tried, from literate programming onward.


Natural languages are ambiguous, and that's a feature. Computer languages must be unambiguous.

I don't see a case for "complex" vs "simple" in the comparison with natural languages.


I fully accept that formalism is an important factor in programming language design. But all HLLs (well, even ASM) are a compromise between machine speak (https://youtu.be/CTjolEUj00g?si=79zMVRl0oMQo4Tby) and human speak. My case is that the current fashion is to draw the line at an overly simple level, and that there are ways to wrap the formalism in more natural constructs that trigger the parts of the brain that have evolved to handle language (nouns, verbs, adverbs, prepositions and so on).

Here's a very simple, lexical declaration made more human friendly by use of the possessive pronoun `my` (or `our` if it is package scoped)...

  my $x = 42;
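
To make the scoping concrete, here's a minimal sketch (the module name and values are mine):

  module Demo {
    my $hidden = 42;    # lexical: visible only inside this block
    our $shared = 7;    # package scoped: reachable as $Demo::shared
  }
  say $Demo::shared;    # 7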

How is that snippet any better than:

x := 42

Or

let x = 42

Or

x = 42

It seems like a regression from modern languages.


Exactly. I mean, think about the programming languages used in aircraft and such. There are reasons. It all depends on what people are willing to tolerate.

>But writing and reading are asymmetrical and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend.

It's exactly the opposite. Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.

It's easy to introduce a lot of complexity and clever hacks, because as the author you understand it. But good code is readable for people, and that's why very expressive languages like Perl are abhorred.


> Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.

I 100% agree with your statement. My case is that a simple language does not necessarily result in simpler and more readable code. You need a language that fits the problem domain and that does not require a lot of boilerplate to handle more complex structures. If you are shoehorning a problem into an overly simplistic language, then you are fighting your tool. OO for OO, FP for FP, and so on.
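
For instance, here's a quick sketch of one language flexing both ways (the class and values are mine, purely illustrative):

  # FP: a pipeline of composable operations
  say (1..10).grep(* %% 2).map(* ** 2);   # (4 16 36 64 100)

  # OO: a small class when the domain is object-shaped
  class Point {
    has $.x; has $.y;
    method norm { sqrt($!x ** 2 + $!y ** 2) }
  }
  say Point.new(x => 3, y => 4).norm;     # 5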

I fear that the current fashion for very simple languages is a result of confusing these aspects and of enforcing certain corporate behaviours on coders. Perhaps that has its place, e.g. Go at Google - but the presumption that one size fits all is quite a big limitation for many areas.

The corollary of this is that richness places a burden of responsibility on the coder not to write code golf. But tbh you can write bad code in any language if you put your mind to it.

Perhaps many find richness and expressivity abhorrent - but to those of us who like Larry's thinking it is a really nice, addictive feeling when the compiler gets out of the way. Don't knock it until you give it a fair try!


Perlis's 10th epigram feels germane:

> Get into a rut early: Do the same process the same way. Accumulate idioms. Standardize. The only difference(!) between Shakespeare and you was the size of his idiom list - not the size of his vocabulary.


Well sure - being in a rut is good. But the language is the medium in which you cast your idiom, right?

Here's a Python rut:

  n = 20  # how many numbers to generate
  a, b = 0, 1
  for _ in range(n):
    print(a, end=" ")
    a, b = b, a + b
  print()

Here's that rut in Raku:

  (0,1,*+*...*)[^20]

I am claiming that this is a nicer rut.
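
To unpack the idiom, here's the same thing spelled out (the variable name is mine):

  my @fib = 0, 1, * + * ... *;   # lazy infinite sequence: * + * sums the two previous terms
  say @fib[^20].join(' ');       # 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181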

  seq = [0,1]
  while len(seq) < 20:
      seq.append(sum(seq[-2:]))
  print(' '.join(str(x) for x in seq))

> I am claiming that (0,1,*+*...*)[^20] is a nicer rut.

If it's so fantastic, then why on earth do you go out of your way to add extra lines and complexity to the Python?


Complexity-wise, this version is more complicated (mixing different styles and paradigms) and it's barely fewer tokens. Lines of code don't matter anyway; cognitive load does.

Even though I barely know Raku (but I do have experience with FP), it took me way less time to intuitively grasp what the Raku was doing vs. both of the Python versions. If you're only used to imperative code, then yeah, maybe the Python looks more familiar, but then... how about riding some new bicycles for the mind?


err - I cut and pasted the Python directly from ChatGPT ;-)

Why is the sky black?

- at night (of course)

- there are ~1 septillion stars that are all shiny


If the universe were infinite and eternal, you’d expect the night sky to be white - all the gaps between stars would be filled in by stars further away.
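
This is Olbers' paradox. As a back-of-envelope sketch (with n the star density and L the per-star luminosity), every spherical shell of thickness dr contributes the same flux, so the total diverges:

  dF = (n \cdot 4\pi r^2 \, dr) \times \frac{L}{4\pi r^2} = nL \, dr,
  \qquad F = \int_0^\infty nL \, dr \to \infty

A finite age (and cosmic expansion) cuts the integral off, which is why the night sky is dark.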

this! I guess this is the definitive proof that the visible universe is finite

errr, market monopoly forces are doing their thing … the point is that only a government can force, e.g., an OS + app anticompetitive monopoly provider to split up into multiple companies


this


I get the sense that this author is looking for a DSL (domain specific language) and landed quite close.


Richard (the dev) has written a good "how to" post on the creation process:

https://dev.to/finanalyst/creating-a-new-programming-languag...


Question: is it a good idea to introduce kids to coding in their mother tongue like this?


It's already been linked in comments here, but there's been a bit of exploration in that area with Hedy. There are some good references to prior work and comments of relevance in this paper https://hedy.org/research/A_Framework_for_the_Localization_o....


I've always given the advice "program in English - comments, variables, function names, everything", and "always use a UK/US keyboard unless you absolutely have to enter localised strings, and even better get someone else to do that"


I've been working with a codebase for a very specific domain where hardly anyone even knew the English terms for the domain-specific things (and some of them probably didn't even exist - highly localized customs etc.). No point in using English then.


I’ve seen cases where this is not followed, such as when the entire development team is in a specific country where some or many team members don’t know English (well enough). As an anecdote, I’ve seen a team at a large multinational company (US origin) in Spain that used function names, variable names, database table names (and column names), log message text and many other things in Spanish. English was only for the language keywords, because that’s what the compiler would accept.


The smart projects that are going for L10N will collect all the UI strings into a file or set of files, separate from the code, and indexed so that the app can just switch language and then begin using a new set of localized strings. This also makes for easy translation where you don't need to rebuild the app, just expand the data files that it's using. Is this not the only way to build apps today, or are "localized strings" still being hardcoded??
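
As a minimal sketch of the indexed-strings approach (the language codes, keys and strings here are all mine):

  # one table per language, keyed by message id; in practice these
  # would be loaded from external data files, not hardcoded
  my %strings =
    en => { greeting => 'Hello', farewell => 'Goodbye'   },
    cy => { greeting => 'Helo',  farewell => 'Hwyl fawr' };
  my $lang = 'cy';                 # switch the whole UI with one assignment
  say %strings{$lang}<greeting>;   # Helo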


Draig is built on L10N and code can be passed to/from Welsh <=> Other (eg English) … the keywords are translated back and forth, but comments and identifiers are whatever the coder writes.


China disregards that and there is an absolutely massive ecosystem of Free and Open Source Software out there if you can read and write their code.


I know little about China (except I like the food and art) but do they actually write code in their native language(s)?!


I can only speak for Japan, but I suspect China is the same. In Japan, English programming is the norm because all mainstream programming languages are written in English. Keywords, libraries and documentation are in English, so there's not really any getting around the fact that you have to learn to read at least some English. Some Japanese developers do write identifiers in Japanese where languages support it, and documentation / comments are often written in Japanese, of course.

I, personally, think this is a lamentable state of affairs that raises the barrier to entry for programming, especially for children. There are education-oriented Japanese programming languages that try to fill the niche for teaching children, but I think it would be beneficial if there were serious languages with a full ecosystem rather than ones designed to be training wheels before learning English programming languages.


Why not use a pre-processor or something like it to simply translate the keywords etc? I know that there isn’t a 1:1 match between English words and words in other languages, but you should be able to get something close enough.
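
Something like this toy sketch could get part of the way there (the Welsh keyword mappings are mine, purely illustrative):

  # map localized keywords back to the host language before compiling
  my %kw = os => 'if', tra => 'while', dychwelyd => 'return';
  sub translate(Str $source --> Str) {
    $source.subst(/ << (\w+) >> /, { %kw{$0.Str} // $0.Str }, :g)
  }
  say translate('os $x > 5 { dychwelyd $x }');   # if $x > 5 { return $x }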


I actually have done that, but there are still problems. It doesn't really do anything to help somebody who can't read English because things like error messages and libraries are still in English, and it doesn't play nicely with IDE tooling, which is fixable in open-source editors but not proprietary editors. It ends up being a lot of effort for an experience that feels very much second-class.



The first STM32 "bluepill"-based SCSI to SD adaptor I ever used had all its source code in Chinese.

Google Translate did a not-terrible job of turning all the comments into English, though it also mangled the code in exciting new ways. With a bit of ingenuity in applying the Artificial Intelligence translations, and a bit of bloodymindedness in applying the Analogue Idiocy of hacking it all about with search-and-replace, I got a pretty plausible translation of it.


Nope, they just add huge Chinese comments.


As someone who's spoken English since age 5, I'm perplexed by this question. I'm genuinely unfamiliar with any perceived downsides and would love to hear more of your thoughts.


No. You want 'for' to be a looping construct with no other meanings.

Seeing code in my native language makes me laugh, I can't take it seriously.


I strongly disagree. Take, for example...

  foreach (var apple in fruitbasket)
    apple.Eat()
vs.

  for (int i = 0; i < fruitbasket.Count; i++)
    fruitbasket[i].Eat();

Even as a low-level programmer, I truly loathe C-style for loops. It takes several seconds to parse them, while the C#-style foreach is instantly grokkable with zero mental overhead. When you're scanning over thousands of lines of code, the speed and ease of reading constructs like these adds up and makes a huge difference. The desire to apply human-friendly syntax to low-level programming is among the greatest motivating factors for the language I'm working on. All of that being said, I think there is a huge advantage in having code that reads like natural language you understand, rather than having keywords that are foreign and meaningless to you.
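
(For comparison, the same shape in Raku - a runnable sketch with names of my own:)

  class Apple {
    method eat { say 'crunch' }
  }
  my @fruitbasket = Apple.new xx 3;   # a basket of three apples
  for @fruitbasket -> $apple {        # no index bookkeeping at all
    $apple.eat;
  }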


Coming soon to a programming ecosystem near you:

LLM(eat apples in fruitbasket)

vs

foreach (apple in fruitbasket) apple.Eat()

Your comment can be repeated almost word for word here.


Not at all. I'm comparing two different syntaxes that can reliably compile to the same machine code. A syntax that produces non-deterministic results is a completely different matter.


> non-deterministic

You can control the amount of non determinism.

And also, it is interesting that you think modern compilers are deterministic.


While an LLM with a fixed seed is technically deterministic, it’s still unpredictable.

> And also, it is interesting that you think modern compilers are deterministic.

A compiler, unless it has a bug, will always produce output matching the specification of the language.


> the specification of the language

Guess what language the specification is written in (if it exists at all)?

It's usually natural language, and even then, compilers deviate from the specification all the time, and the specification has bugs.

Formal specifications are a thing, but the science is nowhere near mature enough to dictate what compilers do.

> unless it has a bug

Compilers are not magic; the way they follow specifications is up to the interpretation of compiler developers like yours truly. And there are tens of thousands of compiler bugs. There was a notorious LLVM pass that had more bugs than lines of code ;)

https://github.com/llvm/llvm-project/issues?q=is%3Aissue

This is a list for just the last few years, after LLVM switched to using GitHub for tracking issues.


Please do not engage in bad-faith arguments centered around pedantry. You know very well what I mean.


I know what you mean, and it was correct a year or more ago. Now, you are wrong.

AI is reliable enough not to mess up this translation now, especially if you configure it right (top-p and temperature parameters).

This abstraction lifting is absolutely happening in front of our eyes now, for the exact same reason that the C for loop is less readable.

The difference is that you don't yet store the prompts, just the generated code. That difference is not going to last too long. Storing prompts and contexts along with generated code is likely how we are going to be doing software engineering for a few decades, before whatever the next leap in technology works out to be.


You could just lock the seed to get "deterministic" behaviour, but you are missing the point of programming languages completely. Programming languages are a set of rules that ~guarantee predictable behaviour down to the bit level. If you were to try to recreate that with LLMs, you run into two problems. One: your LLM is now a programming language where you have to put exact, specific inputs in to get the correct outputs, except you don't have a specification for the inputs and you don't know what the correct outputs even look like because you aren't a software engineer. Two: even with a locked seed, the LLM will still output different code based on the exact order of letters in your prompt, and any change in that will change the output. Compilers can execute a variety of optimizations on your code that change the output, but in the end they are still bound to hard rules, and you can learn the rules when you get output that does not match what your expectations were from the input. And if there is a bug in the execution of the rules by the compiler, you can fix it; that is not possible with an LLM.

This talk about replacing software engineering by people who have no idea what software engineering is gets unbelievably tedious. The advent of JavaScript did nothing to replace software engineers; it just created an entirely new class of developer. It lowered the barrier to entry and allowed anybody to write inefficient, bloated, buggy, and insecure programs. Our hardware is advanced enough that for many trivial applications there is sufficient overhead for inefficient and bloated programs to exist and be "good enough" (although they are causing untold damage in the real world with security breach after security breach). However, lowering the barrier to entry does not replace the existing engineers. You still need a real software engineer to develop novel applications that use the hardware efficiently. The Duchies of JavaScript and Python are simply a new country founded adjacent to, and depending upon, the Software Engineer Kingdom. Now a new duchy is being founded, one that lowers the barrier to entry further to make even more inefficient, even more bloated, even more buggy, and even more insecure programs easier than ever. For some use cases, these programs will be good enough. But they will never replace the use cases that require serious engineering, just as JavaScript never did.


> Programming languages are a set of rules that ~guarantee predictable behaviour down to the bit level.

No, that’s your interpretation of what a programming language is, based on nostalgia and wishful thinking.

If you get a programming system that sacrifices these and works better, people are going to use those.

FYI, I am a compiler developer who has contributed to about three widely used compilers; I assure you I understand what programming languages and compilers are.


> If you get a programming system that sacrifices these and works better, people are going to use those.

Sure. That's a very large "if", though, one that evaluates to false and will continue evaluating to false for the foreseeable future. I am told over and over that I am being replaced and yet I have not seen one singular example of a real-world application of vibe coding that replaces existing software engineering. For starters, where is the vibe-coded replacement for Clang? For Linux? For IDEs? For browsers? I don't mean a concept of an idea of a browser, of course. To make the claim that the new programming system works better than the old system, it must produce something that is actually superior to what people currently use. LLMs are clearly completely and totally incapable of this. What they are capable of is making inferior software with a lower barrier to entry. Superior software is completely off the table, and it is why we don't see any existing software being replaced at scale, even though existing software absolutely has flaws and room for improvement that a "better programming system" would be able to seize upon if it existed.

That you can confidently assert that I am wrong, while having zero real-world evidence of superior software engineering produced by the new system, is indicative of a certain level of cultish thinking that is overtaking the world. Over and over and over again, people keep making these grand claims promising the world is changed, and yet there is no tangible presence of this in reality. You simply demand belief that it will change, as though LLMs are a new religion.


You are wrong about modern AI not being able to produce deterministic output for the for loop we are talking about.

You are right that modern AI does not produce superior software engineering.

You should read the chain of comments again and try to understand the non-deterministic leaps of logic you have made :)


> Storing prompts and contexts along with generated code is likely how we are going to be doing software engineering for a few decades

You specifically asserted that prompts would be the future of software engineering. If my memory is not mistaken, this was edited later to hedge with "likely" and did not include that when originally written?


> If my memory is not mistaken

Your memory is as impeccable as your logic :)

> asserted that prompts would be the future of software engineering.

That’s exactly not what I asserted. Can you find the critical difference?


We'll have to leave it here then. I am fairly confident you edited the comment after the fact, but I cannot prove it, so further discussion is fruitless.


> Look at it. It's beautiful.

Quite right too … I’m choosing HTMX over React for just that.


embrace the brace, dude


Well, Raku has the Slangify module: https://raku.land/zef:lizmat/Slangify

