Hacker News | m-schuetz's comments

Agreed. It has way too much completely unnecessary verbosity. Like, why the hell does it take 30 lines to allocate memory rather than one single malloc.

Just use the VMA library. The low-level memory allocation interface is for those who want precise control over allocations. VMA has shipped in production software and is a safe choice for those who want to "just allocate memory".

Nah, I know about VMA and it's a poor band-aid. I want a single-line malloc that needs zero care about usage flags and produces just a single pointer value, because that's all that's needed in pretty much all of my use cases. VMA does not provide that.

And Vulkan's unnecessary complexity doesn't stop at that issue; there are plenty of follow-up issues that I also have no intention of dealing with. Instead, I'll just use Cuda, which doesn't bother me with useless complexity until I actually opt into it when it's time to optimize. Cuda lets you easily get stuff done first and look at the more complex parts when optimizing, unlike Vulkan, which unloads its entire complexity on you right from the start, before you have any chance to figure out what you're doing.


> I want a single-line malloc with zero care about usage flags and which only produces one single pointer value

That's not realistic on non-UMA systems. I doubt you want to go over PCIe every time you sample a texture, so the allocator has to know what you're allocating memory _for_. Even with CUDA you have to do that.

And even with unified memory, only the implementation knows exactly how much space is needed for a texture with a given format and configuration (e.g. due to different alignment requirements and such). "just" malloc-ing gpu memory sounds nice and would be nice, but given many vendors and many devices the complexity becomes irreducible. If your only use case is compute on nvidia chips, you shouldn't be using vulkan in the first place.


> Even with CUDA you have to do that.

No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory. The usage flags are entirely pointless. Why would UMA be necessary for this? There is a clear separation between device and host memory. And of course you'd use device memory for the texture data. Not sure why you're constructing a case where I'd fetch them from host over PCIe; that's absurd.
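
For reference, a minimal sketch of what those two calls look like with the CUDA driver API (error checking omitted; the buffer size is just an example):

```c
#include <cuda.h>

int main(void) {
    // One-time driver/context setup.
    cuInit(0);
    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    // Device memory: one call, one pointer, no usage flags.
    CUdeviceptr d_buf;
    cuMemAlloc(&d_buf, 64 * 1024 * 1024);

    // Pinned host memory: also a single call.
    void *h_buf;
    cuMemAllocHost(&h_buf, 64 * 1024 * 1024);

    cuMemFreeHost(h_buf);
    cuMemFree(d_buf);
    cuCtxDestroy(ctx);
    return 0;
}
```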

> only the implementation knows exactly how much space is needed for a texture with a given format and configuration

OpenGL handles this trivially, and there's no reason a device malloc couldn't work just as trivially with it. Let me create a texture handle, and give me a function that queries the size, which I can then feed to malloc. That's it. No heap types, no usage flags. You're making things more complicated than they need to be.


> No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory.

That's exactly what I said. You have to explicitly allocate one or the other type of memory, i.e. you have to think about what you need this memory _for_. It's literally just usage flags with extra steps.

> Why would UMA be necessary for this?

UMA is necessary if you want to be able to "just allocate some memory without caring about usage flags". Which is something you're not doing with CUDA.

> OpenGL handles this trivially,

OpenGL also doesn't allow you to explicitly manage memory. But you were asking for an explicit malloc. So which one do you want, "just make me a texture" or "just give me a chunk of memory"?

> Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags.

Sure, that's what VMA gives you (modulo usage flags, which, as established above, you can't get rid of). Excerpt from some code:

```
VmaAllocationCreateInfo vma_alloc_info = {
    .usage         = VMA_MEMORY_USAGE_GPU_ONLY,
    .requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
};

VkImage img;
VmaAllocation allocn;
const VkResult create_alloc_vkerr = vmaCreateImage(
    vma_allocator,
    &vk_image_info, // <-- populated earlier with format, dimensions, etc.
    &vma_alloc_info,
    &img,
    &allocn,
    NULL);
```

Since I don't care about resource aliasing, that's the extent of "memory management" that I do in my RHI. The last time I had to think about different heap types or how to bind memory was approximately never.


No, it's not usage flags with extra steps, it's fewer steps. It's explicitly saying you want device memory, without any magical guesswork about what your numerous potential combinations of usage flags may end up giving you. Just one simple device malloc.

Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory, UMA has zero relation to that. The result can be unified, but there is no need for it to be.

Yeah, OpenGL does not do malloc. I'm flexible, I don't necessarily need malloc. What I want is a trivial way to allocate device memory, and Vulkan and VMA don't do that. OpenGL is also not the best example since it also uses usage flags in some cases, it's just a little less terrible than Vulkan when it comes to texture memory.

I find it fascinating how you're giving a bad VMA example and passing that off as exemplary. Like, why is there both GPU-only and device-local? That VMA alloc info as a whole is completely pointless, because a theoretical vkMalloc should always give me device memory. I'm not going to allocate host memory for my 3D models.


> It's explicitly saying you want device memory

You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.

> Likewise, your claim about UMA makes zero sense. Device malloc gets you a pointer or handle to device memory,

It makes zero sense to you because we're talking past each other. I am saying that on systems without UMA you _have_ to care where your resources live. You _have_ to be able to allocate both on host and device.

> Like, why is there gpu-only and device-local.

Because there's such a thing as accessing GPU memory from the host. Hence, you _have_ to specify explicitly that no, only the GPU will try to access this GPU-local memory. And if you request host-visible GPU-local memory, you might not get more than around 256 megs unless your target system has ReBAR.

> a theoretical vkMalloc should always give me device memory.

No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to? In general, you can't give the copy engine a random host pointer and have it go to town. So, okay now we're back to vkDeviceMalloc and vkHostMalloc. But wait, there's this whole thing about device-local and host visible, so should we add another function? What about write-combined memory? Cache coherency? This is how you end up with a zillion flags.

This is the reason I keep bringing UMA up but you keep brushing it off.


> You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.

There is. One is a simple malloc call, the other uses arguments with numerous combinations of usage flags which all end up doing exactly the same thing, so why do they even exist?

> You _have_ to be able to allocate both on host and device.

cuMemAlloc and cuMemAllocHost, as mentioned before.

> Because there's such a thing as accessing GPU memory from the host

Never had the need for that, just cuMemcpyHtoD and DtoH the data. Of course host-mapped device memory can continue to exist as a separate, more cumbersome API. The 256MB limit is cute but apparently not relevant in Cuda, where I've been memcpying buffers gigabytes in size between host and device for years.

> No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to?

With the mallocHost counterpart.

cuMemAllocHost, so a theoretic vkMallocHost, gives you pinned host memory where you can prep data before sending it to device with cuMemcpyHtoD.

> This is how you end up with a zillion flags.

Apparently only if you insist on mapped/host-visible memory. This and usage flags never ever come up in Cuda where you just write to the host buffer and memcpy when done.

> This is the reason I keep bringing UMA up but you keep brushing it off.

Yes, I think I now get why you keep bringing up UMA: because you want to directly access buffers across host and device via pointers. That's great, but I don't have the need for that and I wouldn't trust the performance behaviour of that approach. I'll stick with memcpy, which is fast, simple, has fairly clear performance behaviour, and requires none of the nonsense you insist is necessary. But what I want isn't an either/or; I want the simple approach in addition to what exists now, so we can both have our cake.
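
A sketch of that memcpy workflow in CUDA driver API terms (the helper name is made up; error checking omitted):

```c
#include <cuda.h>
#include <string.h>

// Prep data in pinned host memory, copy to the device, read results back.
void upload_and_readback(CUdeviceptr d_data, size_t size,
                         const void *src, void *result_out) {
    // Staging buffer in pinned host memory: DMA-friendly, fast copies.
    void *staging;
    cuMemAllocHost(&staging, size);

    // Write on the host side...
    memcpy(staging, src, size);
    // ...then one explicit copy to the device.
    cuMemcpyHtoD(d_data, staging, size);

    // (kernel launches would go here)

    // Read results back the same way.
    cuMemcpyDtoH(result_out, d_data, size);
    cuMemFreeHost(staging);
}
```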


What exactly is the difference between these?

cuMemAlloc -> vmaAllocate + VMA_MEMORY_USAGE_GPU_ONLY

cuMemAllocHost -> vmaAllocate + VMA_MEMORY_USAGE_CPU_ONLY

It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal write a wrapper function and be done with it?

Usage flags never come up in CUDA because everything is just a bag-of-bytes buffer. Vulkan needs to deal with render targets and textures too which historically had to be placed in special memory regions, and are still accessed through big blocks of fixed function hardware that are very much still relevant. And each of the ~6 different GPU vendors across 10+ years of generational iterations does this all differently and has different memory architectures and performance cliffs.

It's cumbersome, but can also be wrapped (i.e. VMA). Who cares if the "easy mode" comes in vulkan.h or vma.h, someone's got to implement it anyway. At least if it's in vma.h I can fix issues, unlike if we trusted all the vendors to do it right (they won't).
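
A rough sketch of what such a wrapper could look like on top of VMA (the GpuAlloc/alloc_buffer names are made up, the allocator is assumed to be created elsewhere, and error handling is minimal):

```c
#include <vk_mem_alloc.h>

typedef struct {
    VkBuffer      buffer;
    VmaAllocation allocation;
} GpuAlloc;

// Malloc-style helper: pass VMA_MEMORY_USAGE_GPU_ONLY for the cuMemAlloc case,
// VMA_MEMORY_USAGE_CPU_ONLY for the cuMemAllocHost case.
static GpuAlloc alloc_buffer(VmaAllocator allocator, VkDeviceSize size,
                             VmaMemoryUsage where) {
    VkBufferCreateInfo buf_info = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
        .size  = size,
        // Broad usage mask so the buffer covers the common cases.
        .usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
                 VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
                 VK_BUFFER_USAGE_TRANSFER_DST_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    };
    VmaAllocationCreateInfo alloc_info = { .usage = where };

    GpuAlloc out = { VK_NULL_HANDLE, NULL };
    if (vmaCreateBuffer(allocator, &buf_info, &alloc_info,
                        &out.buffer, &out.allocation, NULL) != VK_SUCCESS) {
        out.buffer = VK_NULL_HANDLE;
    }
    return out;
}
```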


> and are still accessed through big blocks of fixed function hardware that are very much still relevant

But is it relevant for malloc? Everything is put into the same physical device memory, so what difference would the usage flag make? Specialized texture fetching and caching hardware would come into play anyway when you start fetching texels via samplers.

> It seems like the functionality is the same, just the memory usage is implicit in cuMemAlloc instead of being typed out? If it's that big of a deal write a wrapper function and be done with it?

The main reason I did not even give VMA a chance is the github example that does in 7 lines what Cuda would do in 2. You now say it's not too bad, but that's not reflected in the very first VMA examples.


> I want the simple approach in addition what exists now, so we can both have our cakes.

The simple approach can be implemented on top of what Vulkan exposes currently.

In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

But Vulkan the API can't afford to be "like CUDA" because Vulkan is not a compute API for Nvidia GPUs. It has to balance a lot of things, that's the main reason it's so un-ergonomic (that's not to say there were no bad decisions made. Renderpasses were always a bad idea.)


> In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

If it were just this issue, perhaps. But there are so many more unnecessary issues that I have no desire to deal with, so I just started software-rasterizing everything in Cuda instead. Which is way easier because Cuda always provides the simple API and makes complexity opt-in.


But what if you want both on a shared memory system?

No problem: Then you provide an optional more complex API that gives you additional control. That's the beautiful thing about Cuda, it has an easy API for the common case that suffices 99% of the time, and additional APIs for the complex case if you really need that. Instead of making you go through the complex API all the time.

WebGPU is kinda meh, a 2010s graphics programmer's vision of a modern API. It follows Vulkan 1.0, and while Vulkan is finally getting rid of most of the mess like pipelines, WebGPU went all in. It's surprisingly cumbersome to bind stuff to shaders, and everything is static and has to be hashed and cached, which sucks for streaming/LOD systems. Nowadays you can easily pass arbitrary numbers of buffers and entire scene descriptions via GPU memory pointers to OpenGL, Vulkan, CUDA, etc. with BDA, and change them dynamically each frame. But not in WebGPU, which does not support BDA and is unlikely to support it anytime soon.

It's also disappointing that OpenGL 4.6, released in 2017, is a decade ahead of WebGPU.


WebGPU has the problem of needing to handle the lowest common denominator (so GLES 3 if not GLES 2, because of low-end mobile), and also needing to deal with Apple's refusal to do anything with even a hint of Khronos (hence no SPIR-V, even though literally everything else including DirectX has adopted it).

Web graphics have never and will never be cutting edge, they can't as they have to sit on top of browsers that have to already have those features available to it. It can only ever build on top of something lower level. That's not inherently bad, not everything needs cutting edge, but "it's outdated" is also just inherently going to be always true.


I understand not being cutting-edge. But having a feature-set from 2010 is...not great.

Also, some things could easily have been done differently and then implemented as efficiently as a particular backend allows. Like pipelines. Just don't do pipelines at all. A web graphics API does not need them; WebGL worked perfectly fine without them. The WebGPU backends can use them if necessary, or skip them if more modern systems don't require them anymore. But now we're locked into a needlessly cumbersome and outdated way of doing things in WebGPU.

Similarly, WebGPU could have done without that static binding mess. Just do something like commandBuffer.draw(shader, vertexBuffer, indexBuffer, texture, ...) and automatically connect the call with the shader arguments, like CUDA does. The backend can then create all that binding nonsense if necessary, or not if a newer backend does not need it anymore.


> WebGL worked perfectly fine without them

Except it didn't. In the GL programming model it's trivial to accidentally leak the wrong granular render state into the next draw call, unless you always reconfigure all states anyway (and in that case PSOs are strictly better, they just include too much state).

The basic idea of immutable state group objects is a good one, Vulkan 1.0 and D3D12 just went too far (while the state group granularity of D3D11 and Metal is just about right).

> Similarly, WebGPU could have done without that static binding mess.

This I agree with, pre-baked BindGroup objects were just a terrible idea right from the start, and AFAIK they are not even strictly necessary when targeting Vulkan 1.0.


There should be a better abstraction for solving the GL state-leakage problem than PSOs. We end up with a combinatorial explosion of PSOs when some of the state they bundle is essentially just toggling a few bits in a GPU register, in no way coupled with the rest of the pipeline state.

That abstraction exists in D3D11 and, to a lesser extent, in Metal via smaller state-group objects (for instance, D3D11 splits the render state into immutable objects for rasterizer state, depth-stencil state, blend state and (vertex) input-layout state, the last of which isn't even needed anymore with vertex pulling).

Even if those state group objects don't match the underlying hardware directly, they still rein in the combinatorial explosion dramatically and are more robust than the GL-style state soup.

AFAIK the main problem is state which needs to be compiled into the shader on some GPUs while other GPUs only have fixed-function hardware for the same state (for instance blend state).


> Except it didn't. In the GL programming model it's trivial to accidentially leak the wrong granular render state into the next draw call

This is where I think Vulkan and WebGPU are chasing the wrong goal: To make draw calls faster. What's even faster, however, is making fewer draw calls and that's something graphics devs can easily do when you provide them with tools like multi-draw. Preferably multi-draw that allows multiple different buffers. Doing so will naturally reduce costly state changes with little effort.
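
A minimal sketch of what that multi-draw path looks like in OpenGL terms (glMultiDrawElementsIndirect, GL 4.3+); the struct mirrors the layout the spec defines, and buffer/VAO/program setup is assumed to happen elsewhere:

```c
#include <GL/glew.h>  // any loader exposing GL 4.3+ works

// Matches the layout GL expects for indirect indexed draws.
typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLint  baseVertex;
    GLuint baseInstance;
} DrawElementsIndirectCommand;

// Issue many draws with one call; 'indirect_buffer' holds 'num_draws'
// tightly packed DrawElementsIndirectCommand entries.
void draw_everything(GLuint indirect_buffer, GLsizei num_draws) {
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, num_draws, 0);
}
```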


Agreed, this is the console approach with command buffers that get DMAed, and having more code on the GPU side.

Many people need something in between heavy frameworks, engines, or opinionated wrappers with questionable support on top of Vulkan on one side, and Vulkan itself on the other. OpenGL served that purpose perfectly, but it's unfortunately abandoned.

Isn't that what the Zink, ANGLE, or GLOVE projects are meant to provide? They allow you to program in OpenGL, which is then automatically translated to Vulkan for you.

Those are mostly designed for back porting and not new projects. OpenGL is dead for new projects.

I do all my new projects in OpenGL and Cuda since I'm not smart enough for Vulkan.

> OpenGL is dead for new projects.

Says who? Why?

It looks long term stable to me so I don't see the issue.


DirectX 9 is long term stable so I don't see the issue...

No current-gen console supports it. Mac is stuck on OpenGL 4.1 (you can't even compile anything OpenGL on a Mac without hacks). Android devices increasingly run Vulkan and are sunsetting OpenGL ES. No, OpenGL is dead. Vulkan/Metal/NVN/DX12/WebGPU are the current APIs.



It’s also almost 10 years old and not a current gen system.

Owners of Switch 2 beg to differ.

And secondly that doesn't matter when we look at the amount of sold units.

Valve can only dream to sell a quarter of those numbers.


Valve owns the market, it doesn't have to...

Also, if we're talking Switch 2, you have Vulkan support on it, so odds are you would choose 2x-10x performance gains over OpenGL. It isn't that powerful; my 6-year-old iPhone is on par.


People generally don't realize how much more CPU-efficient draw calls are with Vulkan, DX12 or Metal. And especially on handhelds, the SoC is always balancing its power envelope between the CPU and GPU, so the more efficient you are with the CPU, the more power you can dedicate to the GPU.

The aforementioned abstraction layers exist. You had dismissed those as only suitable for backporting. Can you justify that? What exactly is wrong with using a long-term-stable API, whether via the native driver or an abstraction layer?

Edit: By the same logic you could argue that C89 is dead for new projects, but that's obviously not true. C89 is eternal, and so is OpenGL now that we've got decent hardware-independent implementations.


Wasn't it announced last year that it was getting a new mesh shader extension?

I don't see the point of those when I can just directly use OpenGL. Any translation layer typically comes with limitations or issues. Also, I'm not that glued to OpenGL, I do think it's a terrible API, but there just isn't anything better yet. I wanted Vulkan to be something better, but I'm not going to use an API with entirely pointless complexity with zero performance benefits for my use cases.

That is the approach Google is taking, by making Vulkan the only 3D API on Android.

See Android ANGLE on Vulkan roadmap,

https://developer.android.com/games/develop/vulkan/overview#...


Tbh, we should more readily abandon GPU vendors that refuse to go with the times. If we cater to them for too long, they have no reason to adapt.

I had a relatively recent graphics card (5 years old perhaps?). I don't care about 3D or games, or whatever.

So I was sad not to be able to run a text editor (let's be honest, Zed is nice but it's just displaying text). And somehow the non-accelerated version is eating 24 cores. Just for text.

https://github.com/zed-industries/zed/discussions/23623

I ended up buying a new graphics card in the end.

I just wish everyone could get along somehow.


The fact that we need advanced GPU acceleration for a text editor is concerning.

Such is life when built-in laptop displays are now pushing a billion pixels per second; rendering anything on the CPU adds up fast.

Sublime Text spent over a decade tuning their CPU renderer and it still didn't cut it at high resolutions.

https://www.sublimetext.com/blog/articles/hardware-accelerat...


Most of the pixels don't change every second though. Compositors do have damage tracking APIs, so you only need to render that which changed. Scrolling can be mostly offset transforms (browsers do that, they'd be unbearably slow otherwise).

That’s not the slow part. The slow part is moving any data at all to the GPU - doesn’t super matter if it’s a megabyte or a kilobyte. And you need it there anyway, because that’s what the display is attached to.

Now, the situation is that your display is directly attached to a humongously overpowered beefcake of a coprocessor (the GPU), which is hyper-optimized for calculating pixel stuff, and it can do it orders of magnitude faster than you can tell it manually how to update even a single pixel.

Not using it is silly when you look at it that way.


I'm kinda weirded out by the fact that their renderer takes 3ms on a desktop graphics card that is capable of rendering way more demanding 3D scenes in a video game.

Sure, use it. But it very much shouldn't be needed, and if there's a bug keeping you from using it, your performance outside video games should still be fine. Your average new frame only changes a couple of pixels, and a CPU can copy rectangles at full memory speed.

I have no problem with it squeezing out the last few percent using the GPU.

But look at my CPU charts in the github link upthread. I understand that maybe that's due to the CPU emulating a GPU? But from a thousand feet, that's not viable for a text editor.


Yeah LLVMpipe means it's emulating the GPU path on the CPU, which is really not what you want. What GPU do you have out of interest? You have to go back pretty far to find something which doesn't support Vulkan at all, it's possible that you do have Vulkan but not the feature set Zed currently expects.

It was an ASUS GeForce GT710-SL-2GD5. I see some sources putting it at 2014. That's not _recent_ recent, but it's within the service life I'd expect.

(Finger in the air, I'd expect an editor to work on 20 year old hardware.)

Sold it ages ago. New one (Intel) works fine.

I was running Ubuntu. I forget which version.


> It was an ASUS GeForce GT710-SL-2GD5. I see some sources putting it at 2014. That's not _recent_ recent, but it's within the service life I'd expect.

That's pretty old, the actual architecture debuted in 2012 and Nvidia stopped supporting the official drivers in 2021. Technically it did barely support Vulkan, but with that much legacy baggage it's not really surprising that greenfield Vulkan software doesn't work on it. In any case you should be set for a long time with that new Intel card.

I get where you're coming from that it's just a text editor, but on the other hand what they're doing is optimal for most of their users, and it would be a lot of extra work to also support the long tail of hardware which is almost old enough to vote.


I initially misremembered the age of the card, but it was about that old when I bought it.

My hope was that they would find a higher-level place to modularize the renderer than llvmpipe, although I agree that was an unreasonable technical choice.

Once-in-a-generation technology cliff-edges have to happen. Hopefully not too often. It's just not pleasant being caught on the wrong side of the cliff!

Thanks for the insights.


Text editor developers get bored too!

> we should more readily abandon GPU vendors

This was so much more practical before the market coalesced to just 3 players. Matrox, it's time for your comeback arc! And maybe a desktop PCIe packaging for Mali?


The market is not just 3 players. These days we have these things called smartphones, and they all include a variety of different graphics cards on them. And even more devices than just those include decently powerful GPUs as well. If you look at the Contributors section of the extension in the post, and look at all the companies involved, you'll have a better idea.

There are still three players in smartphones realistically.

ARM makes their Mali line, which vendors like MediaTek license and put straight on their chips.

Qualcomm makes their custom Adreno GPUs (derived from Radeon Mobile). They won't sell them outside Snapdragon.

Samsung again licenses Mali from ARM, but in their flagship Exynos chips they use AMD's GPUs. They won't sell them outside Exynos.

PowerVR makes GPUs with feature sets so outdated that Pixel 10 phones can't even run some benchmarks.

And then there's Apple.


No. I remember a phone app (WhatsApp?) doggedly supporting every godforsaken phone, even the Nokias with the zillion incompatible Java versions. A developer should go where the customers are.

What does help is an industry-accepted benchmark, easily run by everyone. I remember browser CSS being all over the place, until that whatsitsname benchmark (with the smiley face) demonstrated which emperors had no clothes. Everyone could surf to the test and check how well their favorite browser did. Scores went up quickly, and today CSS is in a lot better shape.


The Acid2 test is the benchmark you’re thinking of, for anyone not aware: acid2.acidtests.org

NVidia says no new gamer GPUs in 2026, and increasing prices through 2030. They're too focused on enterprise AI machines.

I think this is one of the biggest issues here: the EU actually forbids charging credit card users the transaction fee. On the contrary, it should make it mandatory that card users pay the transaction fees themselves. This would automatically force card providers to reduce their fees, because nobody wants to use cards with high fees. It would also get rid of nonsensical cash-back systems.

The EU also regulated the fee to be very low, something like 0.1% of the transaction value, comparable to the implicit costs of handling cash. If it were American-style 5% then it would be a problem, but at 0.1% it's not.

I suspect we are only 5-10 years away from Vulkan finally being usable. There are so many completely needlessly complex things, or things that should have an easy path for the common case.

BDA, dynamic rendering and shader objects almost make Vulkan bearable. What's still sorely missing is a single-line device malloc, a default queue that can be used without ever touching the queue family API, and an entirely descriptor-free code path. The latter would involve making the NV bindless extension the standard, which simply gives you handles to textures without making you manage descriptor buffers/sets/heaps. Maybe also put an easy path for synchronization on that list and make the explicit API optional.

Until then I'll keep enjoying OpenGL 4.6, which has had BDA-style C pointers in GLSL shaders since 2010 (NV_shader_buffer_load), and which allows hassle-free buffer allocation and descriptor-set-free bindless textures.
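
The bindless path in question, roughly (assumes ARB_bindless_texture, a live GL 4.5+ context, and already-created 'tex' and 'ubo' objects; error checking omitted):

```c
#include <GL/glew.h>  // any loader exposing ARB_bindless_texture works

void publish_texture_handle(GLuint tex, GLuint ubo) {
    // Turn the texture into a plain 64-bit handle; no descriptor sets involved.
    GLuint64 handle = glGetTextureHandleARB(tex);
    glMakeTextureHandleResidentARB(handle);

    // The handle is just data: store it in any buffer the shader reads,
    // e.g. a UBO/SSBO with per-material info, and sample it in GLSL
    // (with the bindless extension enabled in the shader).
    glNamedBufferSubData(ubo, 0, sizeof(handle), &handle);
}
```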


I use Vulkan on a daily basis. Some examples:

- with DXVK to play games
- with llama.cpp to run local LLMs

Vulkan is already everywhere, from games to AI.


Is it, though? Japan and America also have good relations, despite the latter nuking the former twice. Germany and other EU states are close now despite being on opposite sides in two world wars. Some/most countries manage to move on from past conflicts.

GTA II is still my favourite GTA.

Used to play it with my siblings after dinner in multiplayer. I loved getting invisibility and trying to sneak up behind them with a flamethrower.

I still don't understand how Microsoft lets standby remain broken. I can never leave the PC in my bedroom in standby because it will randomly wake up and blast the coolers.

Probably because the quality of PC BIOS/firmware is generally abysmal and getting vendors to follow spec is like herding cats.

This particular issue really hits a nerve.

Consumers _do not care_ if it is the firmware or Windows.

Dell was one of the earliest, and biggest, brands to suffer these standby problems. Dell has blamed MS and MS has blamed Dell, and neither has been in any hurry to resolve the issues.

I still can't put my laptop in my backpack without shutting it down, and as a hybrid worker, having to tear down and spin up my application context every other day is not productive.


Yeah I hear you. One of the reasons I’m still inclined towards Mac laptops for “daily drivers” is precisely because it’s disruptive to have to do a full shutdown that obliterates my whole workspace. Other manufacturers can be fine for single-use machines (e.g. a study laptop that only ever has Anki and maybe a browser and music app open), but every step beyond that brings increased friction.

Maybe the most tragic part is that this drags down Linux and plagues it with these hardware rooted sleep issues too.


S3 sleep was a solved problem until Microsoft decided that your laptop must download ads^Wsuggestions in the background and deprecated it. On firmware that still supports S3, it works perfectly.

Sleep used to work perfectly fine up until, I don't know, 10 years ago. I doubt hardware/firmware/BIOS got worse since then; this is 100% a Microsoft problem.

Sadly, even if Microsoft had a few lineups of laptops that they'd use internally and recommend, companies would still get the shitty ones if it saves them $10 per device.

Why would building from source be safer? Are you vetting every single line of third-party source code you compile and use?

You're certainly not vetting a single byte of an executable, so building from source is safer.

Binaries or source, it's pretty much the same unless you thoroughly vet the entire source code. Malicious code isn't advertised and commented and found by looking at a couple of functions. It's carefully hidden and obfuscated.


However well the code is hidden and obfuscated, some parts of the source code are going to be looked at.

For a binary, none, ever, except in the extremely rare case that someone disassembles and analyzes one version of it.

The fact that open source doesn't equal security doesn't mean it isn't beneficial to security.

