oflordal's comments | Hacker News

You can do that on both HIP and CUDA through e.g. hipHostMalloc and the CUDA equivalent (not officially supported on the AMD APUs, but it works in practice). With a discrete GPU the GPU will access memory across PCIe, but on an APU it will go full speed to RAM as far as I can tell.
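Roughly like this, as a minimal sketch (error checking omitted; on the CUDA side the equivalents would be cudaHostAlloc with cudaHostAllocMapped and cudaHostGetDevicePointer):

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Kernel reads/writes host RAM directly; no explicit hipMemcpy anywhere.
    __global__ void scale(float* data, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) data[i] *= 2.0f;
    }

    int main() {
      const int n = 1 << 20;
      float* host = nullptr;
      // Pinned, device-visible host allocation (zero-copy).
      hipHostMalloc(&host, n * sizeof(float), hipHostMallocMapped);
      for (int i = 0; i < n; ++i) host[i] = 1.0f;

      float* dev = nullptr;  // device-side view of the same allocation
      hipHostGetDevicePointer((void**)&dev, host, 0);

      scale<<<(n + 255) / 256, 256>>>(dev, n);
      hipDeviceSynchronize();

      printf("%f\n", host[0]);  // 2.0
      hipHostFree(host);
      return 0;
    }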


Did anyone evaluate this compared to using API access through an external GUI (e.g. continue.dev)? For software dev, did the cost end up higher? I am thinking this can be more convenient (and I suppose engineers can more easily use it outside work as a perk). Given that practical use across a team will vary, you may get a lower price when using the API, and perhaps additional opportunity for scripted use.


Any company that is not allowed to share code with third parties, whether by internal or customer policy. This should be a lot of companies.


Or just not wanting to live in a surveillance-first, service-first world.


ARM big.LITTLE (which the M1 is at least somewhat based on) was announced the year before the prediction.


No, this is about HW architectures. While they are likely evolving towards one another, there are tile-based ones (like Imagination and ARM Mali) and immediate-mode ones (Nvidia, AMD) that both implement the same APIs (OpenGL, Vulkan, etc.). All these HW architectures are modern and in use.


Basically all modern GPU architectures implement tiled rasterization. NVIDIA has been doing it since Maxwell (2014) and AMD has been doing it since Vega (2017). Even Intel has been doing it for a few years now starting with their Gen 11 (2019) GPUs.


Those are going to require some serious citations. I'm quite sure most desktop GPUs don't run as tiled renderers at least under normal circumstances.


> Specifically, Maxwell and Pascal use tile-based immediate-mode rasterizers that buffer pixel output, instead of conventional full-screen immediate-mode rasterizers.

https://www.realworldtech.com/tile-based-rasterization-nvidi...

He describes it as "tile-based immediate mode" in the article and the video should go into more detail about it. It's been a while since I watched it.


The parent article already discusses that article, saying those GPUs don't use TBR in areas where the primitive count is too high or something:

> Another class of hybrid architecture is one that is often referred to as tile-based immediate-mode rendering. As dissected in this article[1], this hybrid architecture is used since NVIDIA’s Maxwell GPUs. Does that mean that this architecture is like a TBR one, or that it shares all benefits of both worlds? Well, not really…

> What the article and the video fails to show is what happens when you increase the primitive count. Guillemot’s test application doesn’t support large primitive counts, but the effect is already visible if we crank up both the primitive and attribute count. After a certain threshold it can be noted that not all primitives are rasterized within a tile before the GPU starts rasterizing the next tile, thus we’re clearly not talking about a traditional TBR architecture.

[1] https://www.realworldtech.com/tile-based-rasterization-nvidi...


Classic TBDRs typically require multiple passes on tiles with large primitive counts as well. Each tile's buffer containing binned geometry generally has a max size, with multiple passes required if that buffer size is exceeded.
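To make that concrete, here is a toy software sketch of the binning idea (nothing like real hardware, purely to show why a fixed-size per-tile bin forces extra passes):

    #include <cstdio>
    #include <vector>

    // Toy model: primitives are binned per tile into a fixed-size buffer.
    // When a tile's bin fills up it is flushed (rasterized) and refilled,
    // i.e. that tile needs more than one pass over its primitives.
    struct Tile {
      static constexpr int kBinCapacity = 4;  // made-up, tiny for illustration
      std::vector<int> bin;                   // primitive IDs binned to this tile
      int passes = 0;

      void add(int prim) {
        bin.push_back(prim);
        if ((int)bin.size() == kBinCapacity) flush();
      }
      void flush() {
        if (bin.empty()) return;
        ++passes;  // one rasterization pass per flush
        printf("pass %d rasterizes %zu primitives\n", passes, bin.size());
        bin.clear();
      }
    };

    int main() {
      Tile tile;
      for (int prim = 0; prim < 10; ++prim)  // 10 primitives touch this tile
        tile.add(prim);
      tile.flush();  // end of frame: flush whatever is left
      printf("tile needed %d passes\n", tile.passes);  // 3 passes for 10 prims
    }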


Yeah, please see https://news.ycombinator.com/item?id=27898421

Having watched the video, I'm fairly certain what is being observed is not really tiled.

I'm not, however, sure what "tile-based immediate-mode rasterizers that buffer pixel output" are, but I think that's enough qualifications to make it somewhat meaningless. All modern GPUs dispatch thread groups that could look like "tiles" and have plenty of buffers, likely including buffers between fragment output and render target output/color blending, but that doesn't make it a tiled/deferred renderer.


Section 5.2 of Intel's Gen11 architecture manual [1]

(yes, PTBR is only enabled on passes the driver thinks will benefit from it)

[1] https://software.intel.com/content/dam/develop/external/us/e...


AMD has even talked publicly about how their rasterizer can run in a TBDR mode that they call DSBR.

https://pcper.com/2017/01/amd-vega-gpu-architecture-preview-...


Interestingly, Nvidia has been using tile based rasterizers for a bit too. https://www.techpowerup.com/231129/on-nvidias-tile-based-ren...


It has often been claimed that Nvidia has switched to tile-based rasterization for their desktop renderers, but I haven't seen a source that confirms this. I suspect this is speculation due to changes in raster order that produce side effects that look tiled even though they aren't.


This has been empirically tested on multiple occasions. There is an article on Real World Technologies discussing this, and the results have been replicated for newer AMD GPUs as well. I have a little tool for macOS that tests these things out, and the Navi GPU in my MacBook is definitely a tiler (the Gen10 Intel GPU is not).


It's brought up in multiple other comments, so I won't bother going into detail, but the empirical testing is flawed and is actually measuring changes in other details of thread launch behavior.


It is as much or more about saving power by reducing costly traffic to external RAM. In the pathological cases where you don't compress you simply don't save power; on average you do.


Good point. I hadn't imagined that particular case.


If you think it is fun, having a hobby (mturk) that is a small net positive can save you a lot of money compared to having a hobby which is a net loss (most hobbies).


> If you think it is fun [...] (mturk)

If this was generally perceived as fun, then Amazon would charge money for you to "play" mturk. The fact that mturk work pays tells me that mturk is pretty much nobody's hobby.


The "fun" may be to earn a little extra money doing menial tasks. Just the other day there was a discussion here about EVE online and how much menial management is needed in the game to set up for a small amount of fun PvP combat time. What's the difference between that and doing some mindless menial tasks for a couple extra bucks to go out on the weekend?


> The "fun" may be to earn a little extra money doing menial tasks.

That the pay is fun (or enables fun) does not make the work fun. Here mturk was named a potential "hobby"; that's just ridiculous to me.

> Just the other day there was a discussion here about EVE online and how much menial management is needed in the game to set up for a small amount of fun PvP combat time. What's the difference between that and doing some mindless menial tasks for a couple extra bucks to go out on the weekend?

You could argue that EVE setup is "work" required to enable some "fun" play. I don't think anyone would consider EVE setup (just the setup) as a hobby.


Some people's "hobby" is couponing. I doubt they consider the act of digging through papers to find and cut out coupons to be fun, but the act as a whole is. But I guess if you define a hobby strictly as something you do not for pay, you're correct. If you define it more broadly as something you do during non-work time, then it fits.


I don't think you can separate the setup from the work so easily. I highly doubt if you just had the "fun" play then it'd be a successful game. There are plenty of action packed alternatives if you just want the battle part all the time.

People like grinding. Just not when they have to accept that they're grinding for the sake of grinding. There needs to be an upcoming battle or achievement or texture pack unlock or a few extra bucks in the bank in order for them to get their "I'm being productive" kick every time they complete one of their many short, simple, well defined grind tasks. Just a continuous stream of little wins.


Different strokes for different folks


Yes. Theme parks charge at the door, yet software shops pay programmers to come in each day.

There is clearly a pattern here.


Programming is fun for programmers, we don't get charged to play with computers. :)


Programming is fun for 15 minutes out of each hour, the other 45 are pretty frustrating.


I think I know what you're getting at. Where I work, you could almost set your watch on the almost-once-per-minute frequency of someone yelling "fuck" or "shit" because they're annoyed at something in code (I'm no exception to this), but I think it's safe to say that the 45 minutes of frustration are worth it for the 15 minutes of "I'm the coolest person ever!".


> Programming is fun for programmers

Bull. If this were true they would line up to join our company and I could charge them at the door for working with us.

Some programming is fun, sure. Many parts of my work are fun. But I'd rather do other things with my life if money were not an issue.


I agree. I think I have a pretty decent job, but if they stopped paying me I wouldn't show up "just for fun".

Programming on stuff I care about is fun, and for some beautiful moments at work there is overlap of "stuff I care about" and "stuff that is valuable to the business", but most of the time the work is "meh" and I mostly view it as "this is much better to me than most other jobs would be for me".


Why would they line up to work for you of all people? They can work for anyone they want, it’s somewhat of a seller’s market when it comes to programming.

I imagine just about nothing is 100% fun 100% of the time. If many parts of your work are fun, it’s fair to say your work is fun. If you’d rather be doing something else, you could try that, too. Maybe you’d like it more.


Maybe not all programmers, and maybe not all tasks...

But when I compare programming to RPGs, I feel the same way. In both of them, I spend a lot of effort supporting the really fun parts with work. I enjoy both, and I keep doing both. But RPGs have a ton of grind that I generally don't like except that it enables me to get to the fun stuff.

Most of my hobbies are like this. There are things I have to do in prep for the fun stuff, or after the fun stuff to finish the project. And I do them to get back to the fun again.


What if it’s fun, but they can program without you? Then they might choose to have fun with the highest bidder.

I would program even if money was no issue. There are many who would.


Me too, but not 30+ hours per week. And on something of my choosing. And...

Then programming usually does not come by itself: meetings, company politics, and a little management all usually come with it when programming professionally. Many of those tasks I consider less fun than the programming itself.


Supposedly the step in the middle is SW optimizations?


This, 1000%. Especially floating point arithmetic. Compilers aren't that great at vectorization (or more accurately, people aren't that great at writing algorithms that can be vectorized), and scalar operations on x86_64 have the same latency as their vectorized counterparts. When you account for how smart the pipelining/out-of-order engines on x86_64 CPUs are, even with additional/redundant arithmetic you can achieve >4x throughput for the same algorithm.

Audio is one of the big areas where we can see huge gains, and I think philosophies about optimization are changing. That said, there are myths out there like "recursive filters can't be vectorized" that need to be dispelled.
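As a trivial illustration of what the vector form looks like, a hypothetical gain loop written with SSE intrinsics (a compiler will happily auto-vectorize something this simple; the hard part is restructuring things like recursive filters so they end up in this shape):

    #include <xmmintrin.h>  // SSE

    // Scalar version: one multiply per iteration.
    void apply_gain_scalar(float* buf, int n, float gain) {
      for (int i = 0; i < n; ++i) buf[i] *= gain;
    }

    // Vectorized version: four multiplies per iteration, same latency per op.
    void apply_gain_sse(float* buf, int n, float gain) {
      __m128 g = _mm_set1_ps(gain);
      int i = 0;
      for (; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(buf + i);
        _mm_storeu_ps(buf + i, _mm_mul_ps(v, g));
      }
      for (; i < n; ++i) buf[i] *= gain;  // scalar tail
    }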


It has gotten very tricky on x86 since using the really wide SIMD instructions can more than halve your clock speed for ALL instructions running on that particular core.


IIRC that's only on some processors using AVX2/AVX-512 instructions, which are still faster than scalar (depending on what you're doing around the math).

Things being tricky with AVX and cache-weirdness doesn't change the fact that if you're not vectorizing your arithmetic you're losing performance.

There's also an argument to be made that if you're writing performance-critical code you shouldn't optimize it for mid/bottom-end hardware, but not everyone agrees.


This is obviously a nice gesture by Nvidia, but I always wonder how it can be legal to have different amounts for fathers/mothers? Are there no laws on discrimination based on sex in the US?


It's not different for fathers and mothers, it's different for "person giving birth" and everyone else (including adoptive/foster moms).


Discrimination is a non-issue here. The mother undergoes a medical procedure / event (giving birth). The father does not.

So the one giving birth gets 22 weeks. The father gets 12 weeks. The mother of adopted kids gets 12 weeks.

I see no issue here. There are differences between the sexes, and catering to / supporting those differences doesn't automatically mean discrimination.


> how can it be legal to have different amounts for father/mothers? Are there no laws on discrimination based on sex in the US?

Well, the law does recognize that the person who just delivered a baby went through a much harder time than the one who simply contributed the other half of the genetic material 9 months before.

After my wife gave birth, she breastfed every 3 hours (typical for a newborn) so that means very little sleep. Recovery from having pushed another human being through the birth canal (or from having the abdomen cut wide open) is not trivial.

Then there's the baby's bonding, which is primarily with the mom in the first few months.

Paternity leave is a great thing; as a dad I got to support my wife and child with all the household stuff. And there is no replacement for witnessing the first few months of one's child's life. That goes by so fast.

But as a dad, I recognize that the largest burden was borne by my wife. Many trivialize the effort women go through, but I think they are the unsung heroes of humanity.


I'm a guy who just had a baby. I'm a pretty hands-on father. However, I have no problem with birth-givers (i.e. the biological mom) getting more leave than men. In fact, it is a shame how bad mat leave is in most places. Even in Canada, it isn't full income replacement for a year ... there are a bunch of caveats.


It's a political 3rd rail. There are laws about discrimination based on sex in the US, but you generally get a lot of flack for pointing out the unfairness if it benefits women, which is mostly considered ok since it's making up for the many years that the discrimination went the other way (and I'm actually ok with that).

And to be fair, it's a hell of a lot harder on a birth mother than anyone else. You'll notice adoptive mothers in this policy only get the 12 week benefit too, so this one is already fairer than most.


Your use of "unfairness" sticks out to me, considering you go on to say you're OK with it. Are you actually OK with the disparity or do you think it's unfairness? Can't really be both.


Unfair can simply mean not the same. Dunno about GP, but this is a pretty popular sentiment. Look at e.g. GitHub saying they won't consider racist or sexist behaviour, depending on who the offender is. Look at recent popular protests in the US, where leaders organize people by skin colour.


My wife had to take unpaid FMLA. I got 6 weeks paid. They even noticed she took 3 days too long and made a big deal out of it, requiring her to come back exactly at the end of the allowed FMLA time. This is still a step in the right direction. Plus, giving birth is a little bit more difficult than my job as a dad.


I haven't seen any men breastfeeding lately. Nor do fathers regularly go through postnatal depression or suffer the other ailments of pregnancy. I suggest you talk to some mothers (or your own) before making such absurdly uninformed assertions.


Actually, postnatal depression also affects men.


Not trying to imply that it doesn't. It is a stressful situation for everyone. But the rates in women are significantly higher. (Ironically, being a stay-at-home mum can be isolating, and it might be better for her health to go back to work...) One analysis also found a small but definite correlation between a mother being depressed and the father also being depressed. The incidence varies strongly depending on risk factors (smoking, obesity, poverty, trauma), from 10-25% in women and about 5-10% in men (though men tend to under-report...).

Small study: http://bjp.rcpsych.org/content/169/1/30.short Meta study: http://jama.jamanetwork.com/article.aspx?articleid=185905


I'd just be happy to get anything. When my wife gave birth I was forced to use my entire vacation time.


There are games and experiences that people seem to think are interesting enough to play on the GearVR. The PS4 will be more powerful than the phones powering that. Obviously you have to adjust your expectations compared to the Rift, but it seems like there can be value in VR platforms that are not as fast as the Rift/Vive minimum specifications.


That's a bit of an exaggeration regarding the GearVR. There are interesting games, sure. Just not interesting enough to hold your attention for more than 10 minutes or justify spending so much money on VR. If PSVR offers the same level of experience, then I think this will be the last generation of VR.


And it's important to recall that VR works best at >=90fps, which requires more horsepower.


You do realize the PSVR is 120fps? The PS4 as-is will drive that if the game allows; otherwise it drops back to 90.


My comment was in reference to GearVR.


What do you mean? VR can't be a cinematic experience?

