mathiasgredal's comments

Not exactly related to Ruby on Rails, but why isn't there a way to run a Python Tkinter app in the browser using WASM?

I have a medium-sized, dependency-free Tkinter program written in Python, and AFAIK there is no way to run it in the browser.


Because Tk draws using native OS APIs. So you’d need to run the whole OS in the browser.

Or you could implement a new backend for Tk to draw using some web APIs


Or he could use Replit, which did go to the trouble of implementing a web backend for the Tk window manager.

There's also a GTK backend for Tk (gtkttk), as convoluted as that sounds, so presumably you could use GTK's web implementation (Broadway) behind that...

In any case, Tk doesn't require an OS to do window drawing, it's entirely up to how things are hooked up.


It’s been a while, but if you have a browser old enough, here is the browser plug-in for Tcl/Tk: https://www.tcl-lang.org/software/plugin/


That doesn't render a web interface; it just bundles the platform-specific executable so you can run scripts embedded in the web browser. Kinda like a hole in the web page into which an exe is shoved.

Also, no browser runs that stuff anymore, and it wouldn't have been safe to run even when they did.


In a society where the abuse of human labour was factored into the cost of the product, the 8-inch fab line would have been shut down, since the cost of the 8-inch wafers would then be prohibitive and not competitive with wafers from the 12-inch line. This, in turn, would mean that customers would have to switch over to 12-inch wafers.

We are not supposed to compete on who can abuse their workers the most to improve efficiency and to cut costs. Thankfully, knowledge work does not seem to scale the same way as manual labour, meaning that more abuse of the workers does not mean more output over the long-term.


We have a replacement for CUDA: C++17 parallel algorithms. They have vendor support for running on the GPU from Intel, AMD and NVIDIA, and will also run across all your CPU cores. The GPU vendor's compiler converts your C++ into something that can run natively on the GPU. With unified memory support, it becomes very fast to run computations on heap-allocated memory using the GPU, though implementations also support non-unified memory.

Vendor support:

- https://www.intel.com/content/www/us/en/developer/articles/g...

- https://rocm.blogs.amd.com/software-tools-optimization/hipst...

- https://docs.nvidia.com/hpc-sdk/archive/20.7/pdf/hpc207c++_p...


Having looked briefly at the code, I still think C++17 parallel algorithms are more ergonomic than OpenMP: https://rocm.blogs.amd.com/software-tools-optimization/hipst...


Is language support why people like OpenMP?

I think it is nice because it supports both C and Fortran, and they use the same runtime, so you can do things like pin threads to cores or avoid oversubscription. Calling a Fortran library that uses OpenMP from C code that also uses OpenMP doesn't require anything clever.


OpenMP has been around for a long time. People know how to use it, and it has gained many features that are useful for scientific computing.

The consortium behind OpenMP consists mostly of hardware companies and organizations doing scientific computing. Software companies are largely missing. That may contribute to the popularity of OpenMP, as the interests of scientific computing and software development are often different.


>> Is language support why people like OpenMP?

I use it sometimes with C++ because it is super easy to make "embarrassingly parallel" code actually run in parallel. And since it uses nothing but #pragma statements, the code will still compile single-threaded if you don't have OpenMP, as the pragmas will simply be ignored.


Funny how we only get the LoC counts for the different versions, but not the performance...

Of course the parallel algorithms are shorter, it's a more high-level interface. But being explicit gives you more control and potentially more performance.


GPGPU programming seems to be in a really good spot with the widespread adoption of C++17 parallel algorithms by GPU vendors.

Now, I can just program against this API using standard C++ code, that interacts with CPU heap allocated memory, and get really performant computation on it using standard map-filter-reduce semantics.


Which raises the question: if attractiveness is such a big predictor of success, why hasn't everyone evolved to become very attractive? The evolutionary pressure for increased attractiveness should be very high, since it affects so many areas of your life, from career success to finding partners, etc.

Is it that the speed at which we evolve to become more attractive is outpaced by our ability to become better at discriminating for attractiveness?


It's possible that people have evolved to be more attractive than they were long ago, but the issue is not one of absolute beauty but of relative attractiveness. There will always be a top 10% and a bottom 10%, even if the entire population increases in attractiveness across the board.

But separately, it's not clear that attractiveness is hereditary in the same way that height, for example, is. If two tall people have kids, they will almost certainly be tall. It's also incredibly unlikely that two short people will have kids that are tall.

With attractiveness, heterogeneity between generations is much more common. I know some very attractive people who have not-very-attractive offspring, and vice versa. It depends on how the features of the two parents mix together.


I mean at the very minimum there lies a great deal within our locus of control that can influence overall attractiveness. Are you in shape and well groomed? So many are not, and it’s weird that people don’t at least try to hit that Pareto inflection point within their own possible range of attractiveness, knowing, as we all do, that it makes a difference.


That was why OP suggested having two motors on each joint, going in opposite directions. The problem with this is that you now have twice as many motors.


Oof, I appreciate you pointing that out because somehow I got the first part and skipped that one. Yeah, I could see that working, but it sounds inefficient.


Just use bsdtar, which will convert a tar archive to cpio: https://unix.stackexchange.com/a/581014

I have used it to convert Docker images to bootable Linux initramfs archives for the RPi 4.
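The conversion itself is a one-liner along the lines of the linked answer (sketched here with a toy archive standing in for a Docker export; `@rootfs.tar` tells bsdtar to copy the entries out of the existing archive):

```shell
# Make a toy tar archive to convert (stand-in for a `docker export` tarball).
mkdir -p root/etc && echo hello > root/etc/greeting
tar -cf rootfs.tar -C root .

# Repack it as a newc-format cpio, the format the Linux kernel expects
# for an initramfs, then compress it for the kernel.
bsdtar -cf initramfs.cpio --format newc @rootfs.tar
gzip -kf initramfs.cpio
```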


Do you have a link to that conversion? I wanna use containers to provision the Pis; such a pain.


I've had good luck making environments in a qemu user-mode chroot; then you can more or less just copy the whole filesystem to an SD card and boot it, accounting for the boot partition of course.


If you are already using gRPC in your codebase, then you can define your enums with Protobuf, which does much of the same as the tool shown in this article.
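For example, a Protobuf enum definition generates name/value maps and validity checks in each target language, much like the tool in the article (the package and enum names below are made up):

```proto
syntax = "proto3";

package example;

// proto3 requires the zero value; by convention it is the
// "unspecified" sentinel.
enum Status {
  STATUS_UNSPECIFIED = 0;
  STATUS_ACTIVE = 1;
  STATUS_SUSPENDED = 2;
}
```

In Go, for instance, the generated code includes `Status_name` and `Status_value` maps and a `String()` method on the enum type.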


Ah yes, gRPC, which requires more ceremony and has a worse UX in Go than in …C#, of all languages.


Looks like great changes overall. Sadly they haven't added a columns view to Dolphin, which is my only complaint since switching to KDE from macOS.


Looks like they added an option for split view in Dolphin, if that's what you mean.


A full-featured columns view, like that of macOS (but also implemented by various file browsers over the years), has an arbitrary number of resizable columns. Perhaps optionally with a preview pane at the end where the last column would otherwise be.

You can get the gist of it from this Super User discussion: https://superuser.com/questions/1141631/make-windows-10-file...


Not sure what he means by columns either, but split view in Dolphin has existed for many years already.


Probably the three-column view where you see the parent folder, current directory and selected folder's contents side by side.


It goes to more than three columns; it's more like a drill-down view, and very handy... it's something I miss from macOS too.

https://www.lifewire.com/use-finder-views-on-mac-2260734

