Hacker News | bibyte's comments

Does anyone have a link to the source code? I was thinking about writing some code in asyncio to compare but I couldn't find the code.


A very important project because it is one of the two browsers on Android that support extensions (the other one is Firefox Mobile). I didn't know it was closed source.


It had a GitHub repo (https://github.com/kiwibrowser/android) described as "source code used in Kiwi", but it was just a Chromium codebase thrown there without the actual patches.

Glad to see it's open source now; however, there is no commit history (thus no individual patches) and it's not possible to see which version of Chromium it was forked from.


Yandex mobile does chrome extensions.


Thanks for letting me know. It is good to be aware of alternatives.


If you trust Yandex enough to allow them to possibly gather all your browsing activity, passwords, ... it's like trusting Tencent.


How about Google?


Do you have proof of this?


For example, take a look at the Python API for Z3. Because of operator overloading it is just beautiful. Much better than raw SMT2.

https://ericpony.github.io/z3py-tutorial/guide-examples.htm
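
A minimal sketch of what that looks like (assuming the z3-solver package is installed; the constraint itself is just an illustration from the linked tutorial):

    from z3 import Int, solve  # pip install z3-solver

    x, y = Int('x'), Int('y')
    # Overloaded operators (>, <, +, *, ==) build SMT terms directly,
    # so constraints read like ordinary Python expressions:
    solve(x > 2, y < 10, x + 2 * y == 7)
    # prints a model such as [y = 0, x = 7]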


Sympy is nice too.


Isn't Common Lisp's save-lisp-and-die roughly analogous to PyInstaller? I mean, they both work but nobody seems to be using them. In my experience Common Lisp code is most commonly deployed with an interpreter.


Most Common Lisp is actually compiled, not interpreted.

If you compile code or load compiled code into something like SBCL and dump an image, you can both run it directly from that image AND still have the development tools included.

Some Common Lisp implementations also have delivery tools which create applications without development tools. Some applications, though, include the development tools because they are meant to be extended by the user. For example, a CL-based CAD system might expose Common Lisp as the scripting language for CAD extensions.


Of course many implementations do compile to machine code, but there are also a few that don't (CLISP, ECL, ABCL). It is really complicated to talk about languages that have multiple implementations. That is why I went with "interpreter". That was pretty lazy of me :).

But you didn't say anything about the original question. Have you seen Common Lisp code deployed as an executable or as source code?


Interpreted in Lisp usually means 'executing s-expressions by a Lisp interpreter'.

ECL and CLISP also use compilers. ECL usually compiles to C, which then is compiled to native code via a C compiler. CLISP has its own virtual machine, which is the target of its compiler. Don't know what ABCL does, would have to look it up - but in the end it usually would execute JVM code, which gets compiled by its JIT compiler.

On the question of code deployment: I have seen deployment as an executable, deployment as an executable which also loads compiled code (for patches), and deployment as source code. Commercial and/or end-user applications especially tend to be deployed as executables. For example, ScoreCloud is a LispWorks application shipped as an executable and can be bought on Apple's Mac App Store.

My Lisp Machine at home boots into a Lisp image which contains a whole operating system as compiled Lisp code. That's exotic, but many other Lisp development environments already come as a compiled Lisp executable. For example, on the Mac and on ARM64/Linux I use LispWorks. On the Mac it is the usual Mac application bundle, where the executable already contains much of the IDE and then loads some code on demand from compiled code files.


> ScoreCloud is a LispWorks application as an executable and can be bought on Apple's Mac app store.

Wow, super impressive. You didn't post this on reddit yet :)


The word "compiler" is so overloaded that every time someone mentions one it turns into an argument. This is the whole transpiler vs compiler vs interpreter vs JIT argument. The trouble is that sometimes some programmers implement all of them (myself included).

> CLISP has its own virtual machine, which is the target of its compiler

Then is CPython also a compiler? And Lua?


Yes, generally any source-to-byte-code translator is a compiler. The byte code can then either be interpreted or compiled further. Since the default CPython implementation interprets the byte code, it is said to be an interpreter - but strictly speaking it is not.
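
You can see the compilation step directly with Python's standard dis module (a quick sketch):

    import dis

    def add_one(x):
        return x + 1

    # CPython compiled the function body to byte code at definition time;
    # dis prints the instructions the virtual machine then interprets.
    dis.dis(add_one)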

Originally, interpreters were implementations which execute source code - something which in Lisp is widely available. For example, SBCL for a long time only had a native-code AOT compiler, but now also includes an s-expression interpreter.

Also famous is McCarthy's Lisp interpreter written in Lisp, where he defined the core language in itself. See the paper by Paul Graham about that: http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf That's a very primitive Lisp interpreter - many real ones are implemented in C, assembler...

Just be aware that in the Lisp world 'interpreter' means something very specific: an interpreter of the source language.


Fair enough. My assumption was that you would consider a source to bytecode compiler an interpreter (to be fair most people do). The next time I say something about Common Lisp I will list all the implementations instead of saying something as simplistic as "compiler" or "interpreter".


It's also complicated because some implementations support several modes. ECL, for example, has a Lisp interpreter, a byte-code compiler & interpreter, and native compilation via C. All code variants can work together and call each other.


To be clear, ABCL (if I understand correctly) compiles to JVM bytecode, which is analogous to "machine code" in the context of its target platform (and hell, is sometimes even literally machine code if you lack any semblance of sanity: https://en.wikipedia.org/wiki/Java_processor).

And ECL does ultimately compile down to machine code if you include "compile the resulting C output" in the compilation process, though this is optional.


The binary dump feature is used by projects like StumpWM, Lem and Next.


If you want this on Android it is available in the Termux repository. If you have a flagship device it should be fast enough to be usable.


What Android tablet is comparable to a modern iPad Pro in performance, screen and battery life? I really like the iPad Pro but I hate the artificial restrictions... All the Android tablets I tried out, flagship ones included, just fell short in comparison. I have a Surface Go for when I need Windows, but that is probably one of the worst buys of my life.

I currently use an iPad Pro with Remotix to access OS X + Windows, which works really well. If there were a Remotix for Linux, I would use Linux only, but there is no comparable VNC client for Linux; the ones that exist are unusable in comparison (and I did try them all).


What about the Samsung Galaxy Tab S6? It seems to be comparable to the iPad Pro in terms of performance and screen. I tried out an Android tablet a few months ago and it seemed pretty fine.

If you want to do portable programming I would recommend a light 2-in-1 laptop. You can dual-boot Windows and Linux on it (Windows for media consumption and Linux for programming). But it is bigger than a tablet.

Of course if you use a server you can use anything you want to do programming. But personally the latency is a dealbreaker for me.


I will try the S6; it seems good but someone told me there are no really solid keyboards for it? I never checked it because I already had an iPad.

> But personally the latency is a dealbreaker for me.

I had that issue too :) However that changed when remotemac (no affiliation) introduced me to Remotix; I live in the mountains and my internet is bad, and yet it has no latency even with 4K screen transfer. Anything on Linux is just horrible in comparison. I develop apps on my iPad and it works really well like that.

My ‘dream’ (a bit extreme, but let's say preferred way of working) is to use my x220 with Linux to do that; then I can do all dev on that machine while doing iOS dev remotely. Now I still have to carry an Android tab, a Windows laptop (I can do that on the x220 but I really do not like to dual boot) or an iPad with me.


If you don’t mind moving away from the VNC protocol, the NX protocol (via NoMachine¹) is a fantastic alternative. Faster than VNC and can forward audio from the server to the client as well.

――――――

¹ — https://en.wikipedia.org/wiki/NX_technology


I started seeing this a few days ago. What is even more exasperating is that the default Reddit doesn't support tabs and it is INSANELY slow. I can stream 4K content on YouTube comfortably but I have to wait multiple seconds to open a text thread. Maybe I should try the Boost client...


If you're after tabs, there is, of course, a workaround. I use firefox on desktop and mobile, with extensions to redirect to old.reddit.com and i.reddit.com respectively. Because logins are saved, everything just... works. Opening links in a new tab included.

At least until only Reddit's new design is allowed, and I stop using the site.


I can't edit it now but I was talking about the default Reddit app for Android, not the new mobile website. The website was pretty fast for me. But because of these new changes you can't even view a subreddit without using the app.

I just installed Boost for Reddit and it is pretty fast. But thanks for letting me know about i.reddit.com, I didn't even know that exists. The old Reddit is still the best on desktops, I hope they don't kill it.


My takeaway was the exact opposite of yours. For example the goto graph seems very intuitive and simple. But it is much more complex to reason about compared to exceptions. My takeaway was that simple graphs like these can't really explain the pros and cons of high level programming constructs.


The tradeoff in goto-vs-exception is that a goto needs an explicit label, while an exception allows the destination to be unnamed, constrained only by the callstack at the site where it's raised.

That makes exceptions fall more towards the "easy-to-write, hard-to-read" side of things; implied side-effects make your code slim in the present, treacherous as combinatorial elements increase. With error codes you pay a linear cost for every error, which implicitly discourages letting things get out of hand, but adds a hard restriction on flow. With goto, because so little is assumed, there are costs both ways: boilerplate to specify the destination, and unconstrained possibilities for flow.

Jumping backwards is the primary sin associated with goto, since it immediately makes the code's past and future behaviors interdependent. There are definitely cases where exceptions feel necessary, but I believe most uses could be replaced with a "only jump forward" restricted goto.
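
To illustrate, here is a small Python sketch (made-up names) of an exception used as exactly that kind of forward-only jump:

    # Escaping nested loops with an exception: an unnamed jump forward
    # to the nearest matching handler, never backward.
    class Found(Exception):
        pass

    def find_value(matrix, target):
        try:
            for row in matrix:
                for value in row:
                    if value == target:
                        raise Found  # jump forward out of both loops
        except Found:
            return True
        return False

    print(find_value([[1, 2], [3, 4]], 3))  # True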


It's even clearer when you consider higher-order functions. These diagrams are only able to represent functions of order at most 2, i.e. what is called "dependency injection".
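
For instance (an illustrative sketch with hypothetical names), a function that receives another function as an argument is order 2, which is where these diagrams top out:

    from typing import Callable

    def process(items: list, transform: Callable) -> list:
        # 'transform' is injected by the caller ("dependency injection"),
        # which makes 'process' a second-order function.
        return [transform(x) for x in items]

    print(process([1, 2, 3], lambda x: x * 10))  # [10, 20, 30]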


Can you please elaborate on those problems? Personally I believe exceptions are the worst way to handle errors, except for all the other ways.


Python's soft static type rules and extremely flexible variable instantiation unfortunately turn every line of code into a potential runtime error, because the code can't know at compile time if a variable was introduced to the global context that could bind to any given name. Typos are therefore deadly at runtime in Python in a way that they fundamentally aren't in other languages; if you try to write

    foo = 0
    if foh == 1:  # oh no, I misspelled the variable

many languages (including F#) will fail to compile the program because 'foh' is uninitialized. Python can't know if 'foh' is intended to be a global variable and so will execute the program, only determining while evaluating that line that 'foh' doesn't exist. This is especially insidious in error handling, where the error codepath isn't necessarily exercised; essentially, Python's flexibility implies that if your unit tests don't have 100% line coverage, you can't even know if your program is basically devoid of simple typos naming never-existing variables (a check most languages give you for free).

In a language with that feature, a rich ecosystem of exceptions is almost necessary, because every line of code could hide a runtime exception!
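
A minimal runnable sketch of that failure mode (hypothetical names): the typo is only detected when its line actually executes, so an untested error path hides it completely.

    def report(status, message="all good"):
        if status == "error":
            print(mesage)  # typo for 'message'; no error until this line runs
        else:
            print(message)

    report("ok")             # fine: the misspelled line never executes
    try:
        report("error")
    except NameError as e:
        print(e)             # name 'mesage' is not defined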


Maybe I am missing something but I think every dynamic typing system has this problem. I can't see what this has to do with Python and exceptions in general. It is not like there aren't statically typed languages with exceptions. This comment is an excellent critique of dynamically typed systems but it has nothing to do with exceptions.


The top-thread post was referencing Python, so I'm speaking in the Python domain specifically. Other dynamic languages have this problem also, and other languages with stronger static type guarantees do support exceptions.

I haven't encountered a language with dynamic typing of the sort Python has that doesn't also have a robust runtime exception system, and it'd be interesting to see what that looks like. There are probably some old flavors of BASIC that fit that mold (i.e. variable declaration is not required, and also the only thing it offers for exception handling is setting a label to GOTO if a runtime exception occurs).


I guess I was too fixated on the exceptions part of your comment. You are right that due to Python's dynamic nature every line can turn into a runtime exception. Maybe it's because English is my second language but I couldn't understand that in your first comment.


I can't speak for other people but the only reason I still use Python is because of its syntax (but the "one way to do it" is slowly changing). You get much better concurrency with Golang and even better parallelism with Julia.

It is dominant in data science but still some people use it for other things because the syntax is so simple (Scheme is a close second for me).


If you mean the pseudo-code like quality that (sadly) remains a distinguishing feature, I completely agree.

Simplicity, as in terms of specifying the grammar though? Python's grammar is exceedingly gnarly by now and I'm pretty sure Go, Lua, Prolog, Smalltalk and pretty much every Wirth language are massively simpler, syntactically.


Yes, I meant the pseudo-code quality of Python (I wish Wikipedia would use Python as its pseudocode :)).

I completely agree that Python is pretty hard to implement. Perhaps you have heard that simple doesn't mean easy? I could easily implement a Forth (or assembly) interpreter. But it isn't that easy to understand a big complex Forth program (compared to Python). Simple languages like Forth and assembly are too unstructured for me.


I'm sceptical of the idea that the good things about Python make it hard to implement. Compared to Forth – sure! But most of the non-simplicity, beyond what's attributable to not being an ultra-bare-bones language, comes from organic growth and bad design decisions.

Off the top of my head, here's a list of things that were awesome in Python's design:

1. Clean and concise syntax (whitespace, slices, rest and kwargs, multiline and raw strings; the latter two, although badly designed, were ahead of their time)

2. good default builtin datatypes, with decent API (cf garbage like cons cells or STL)

3. in particular, strings as an immutable vector of bytes, no separate character type (this got screwed up in Python 3, of course)

4. comparison on compound datatypes works out of the box

5. relative uniformity: no value vs. reference types; objects are mostly just dicts, and modules as well

6. repr and repl (both gimped, compared to lisp, but good enough to be super helpful in development), good tracebacks

7. anti-footgun conventions (notably: the mutating-functions-return-None convention; the mutable -> no __hash__ convention; the slicing-copies convention; see the sketch below)
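
The conventions in point 7 are easy to demo (a quick sketch, stdlib only):

    nums = [3, 1, 2]
    print(nums.sort())    # None: mutating methods return None by convention
    print(nums)           # [1, 2, 3]

    copy = nums[:]        # slicing copies, so the original stays untouched
    copy.append(4)
    print(nums, copy)     # [1, 2, 3] [1, 2, 3, 4]

    try:
        {nums: "oops"}    # mutable -> no __hash__: lists can't be dict keys
    except TypeError as e:
        print(e)          # unhashable type: 'list'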

Nothing in this list seems particularly hard on the implementor, but it's of course hardly exhaustive or unbiased. What things that you really like about python do you think are rather hard to implement?


Implementing a minimal Python is pretty easy (even with the huge standard library), but a complete Python implementation is a herculean task. And even after that there is no guarantee that every Python library will run on your implementation (as PyPy shows).

I don't know about you but I would rather write hundreds of Scheme and Forth implementations than one Python interpreter. But I don't really see that as a downside. I mean, who decides on a language based on how easy it is to implement?


Well, all things being equal a cleaner and more understandable language is going to be easier to implement, so I don't think difficulty of implementation should be dismissed so easily. Especially when the difficulty of implementation is not due to advanced features such as a fast runtime, resumable exceptions, proper tail calls, multi-method dispatch, pattern matching, powerful concurrency abstractions, SMP...


Sometimes it feels like I am living in an alternate reality. After I started using Arch Linux as my main OS I was fully prepared for it to break every few months. But it's been more than 2 years (maybe even 3 or 4?) and not one breakage. Also keep in mind that I update it every single day. From time to time I check the Arch Linux news site to see if anything needs manual intervention. So far I haven't needed to do anything.


> Also keep in mind that I update it every single day.

I can't remember what I was looking at, but within the past week I ran across a comment that said Arch Linux only supports constant upgrades such as that, and any delays that result in skipping a version are what risk causing breakages. The commenter was very surprised that they were even thinking about supporting a version jump on whatever the thread was about (pretty sure it was somewhere here on HN).


I also have another VM that I use from time to time. I think my longest gap between updates was about a year. I don't remember having any problems like that.


Can confirm; I went to Arch Linux due to constant breakage under Debian & Ubuntu, and it's been so much more pleasant since then. No more everything exploding left and right as soon as I need a newer GCC or libav version.


Changing the system default GCC version, as it seems you did, is not good practice in environments with conservative package policies.

There's also no need to do that, since nobody prevents users from installing newer versions alongside old ones, and invoking them directly.

Even setting aside the fact that different GCC versions can coexist, complaining about this breakage in absolute terms doesn't make any sense. Releases changing default compiler versions inevitably cause more breakages with packages that haven't been updated to be compatible with newer GCC versions. So ultimately it's a matter of choosing the distribution with the appropriate release model, not a matter of Ubuntu/Debian breaking stuff.

Libav has also been deprecated in Ubuntu long ago, so it's not clear what you refer to. If you happen to refer to the transition from and back to ffmpeg, that's very old history.


> Libav has also been deprecated in Ubuntu long ago, so it's not clear what you refer to

I'm referring to the libav* libraries which are part of the ffmpeg project (not the horribly-named libav fork) - and to external Debian repos providing updated versions of those (for better codec support in media players, etc.), such as debian-multimedia.


That's funny, because I've been running Debian Sid for 15 years as my main OS, doing weekly updates, and the only two cases of breakage I've seen were the glibc6 transition (which was announced and expected) and proprietary video card drivers. You must be thinking of Ubuntu specifically.


To keep the anecdotes going, I've been using a mix of Debian and Ubuntu, both stable/LTS and testing/biannual, for pretty much exactly the same 15 years as you've been running Sid and I've never had any breakage that wasn't caused by me fucking around with binary drivers.

ndiswrapper was the main cause back in the day; shockingly, giving Windows drivers access to the Linux kernel can cause problems. The most recent time was when VDPAU was new and I was trying to get HD video playback working on a mini-PC with an nVidia Ion GPU by running a version of the nVidia driver much newer than the one Ubuntu packaged. Now that I think about it, that must have been around a full decade ago.


I'm not old enough to be able to run a Linux distribution for 15 years. And I also never run Debian on any of my personal computers. Thus, my anecdotes are definitely way less convincing. In other words, you can stop reading here.

The only two major distributions that I used for a sufficient amount of time are Ubuntu and Arch. Arch's "unstableness" is exactly what I want most of the time on my personal computer, as it's my go-to Petri dish. "Stability" would mean that it's harder for me to break it apart and make a Frankenstein out of it. That's exactly what I have been experiencing with Ubuntu LTS releases --- stability.

Most of the time, I want to have all the available LLVM versions alongside all the GCC versions, with all the available binutils (Qemu, Docker, Oracle VBox, etc.) versions, on the latest kernel full of my monkey-patched printk's. When I finally get to break its back, I dive into the Wiki for a few hours to restore it.

I can imagine a non-office, hacking desktop OS that follows the Arch packaging strategy being highly successful.

I also maintain a few compute servers for 10-20 people. They are on Ubuntu LTS. The packages that I need there are always the ones that just work and don't let anyone do anything "cutting edge".


This is the proper response, imo.

Arch is rolling release and you take the good with the bad. The ones who try to defend arch as some paragon of stability miss the point that Arch's model is inherently unstable, but it comes with other benefits.

I'm the one who kicked off this entire conversation pointing out that arch is unstable, and it cracks me up watching silly people scramble to try and defend Arch as being some paragon of stability.

No, it's not. That's baked into its identity.


Aside from patching the kernel I have done everything GP said. Just because the Arch wiki says it is unstable doesn't mean it always is. It just means Arch Linux can do breaking changes (systemd) without worrying about backwards compatibility. And FYI I wasn't defending Arch Linux. It just seems strange to me that everyone is having instability problems and I can't even reproduce it.

I also agree that you shouldn't run your production database on Arch Linux. It isn't made for workloads like that. But personally I find maintaining Arch Linux + "custom packages" (with the AUR) easier than Debian + "latest packages" + "custom packages".


That is only if you use the Debian-specific definition of "stable", which is "does not change". The rest of the world thinks of "fewer bugs" when they think of stable software.


yes, because randomly declaring the other person as using a different definition somehow adds to the conversation and changes their point.

Back here in reality, rolling release is less stable because more bugs in the software get through. And this is a reasonable expectation and not some magical fairyland where bugs never get written so being right up against the dev branch is as stable as being on the stable branch.


> Back here in reality, rolling release is less stable because more bugs in the software get through.

We really live in two different software worlds. For every piece of software I use, the number of bugs is a purely decreasing function of time, especially on the "main" paths and use cases.


Your statement is a logical paradox.

If it were true, it means there wouldn't be bugs in the first place because they wouldn't have gotten written. The very fact that the bugs got written implies new bugs can, and will, be introduced.


I've been running Debian testing since the 4.0 beta and have never had an unbootable system or anything break due to normal updates. I've just reinstalled the system once, to migrate it to 64-bit, since there was no proven procedure to migrate a running 32-bit system to 64-bit.

There were rough patches along the way, but all distros were having the same problems in some way or another (VDPAU, fglrx, multi-GPU support, ndiswrapper & wireless stuff, etc.); overall it's been a set-it-and-forget-it affair for a very, very long time.

...and obligatory xkcd: https://xkcd.com/963/


No, I only ran Ubuntu a few times. My worst breakages were on Debian (generally testing). I remember the MySQL packages completely killing my apt, and GRUB updates wreaking havoc, as two individual examples.

