Hacker News | ehmmm's comments

Most of them are burdened by graphics. It sells better.


On the contrary, a lot of the 2d indie darlings have terrible graphics, they're burdened with hubris ("it can't be that hard to make pixel art") and nostalgia.


On the contrary, what? Who said anything about indie games specifically?

This sub-thread is talking about modern games, all of them. My comment still stands: games that make up most of the market are burdened by graphics.

Ori is a good example of that. It's an indie game with a major publisher and was profitable within the first week. Its graphics are turned up to 11.


Have you heard of the phenomenon of Minecraft?


Dead ends are there for a reason: to create a sense of a larger world. They are usually blocked by some logical obstacle, and occasionally lead to a short alternate area, but never an alternate path. Half-Life is linear by design; surely you must agree that a linear path that tries to create a bigger world is better than one that doesn't. Those paths perhaps broke the illusion for you, but at least there was some illusion, and without the paths there would be none.

I agree with the first paragraph: the first game was better, since it didn't constantly trap you in rooms where you had to wait for dialogue to play out, which were just glorified cut-scenes. For example, in the first game if a friendly character tried to convey some dialogue you could ignore/kill/avoid him, but in the second game you can only wait.


There were plenty of situations in the first game where you had to wait for a character to finish his dialogue and open a door or something so you could progress.


Yes. And if I remember correctly, if you bomb the block, the super missile logo appears on the block.


Those mechanics are way too subtle. Today, the player must be explicitly told how to overcome obstacles. Games have become mainstream, and so has the target audience; this has caused a push towards the lowest common denominator in game design.


It could also be the case that game designers have gotten lazier, as the purpose of mainstream games shifts from challenging players to encouraging in-game purchases and DLC, and explicit tutorials take a lot less effort when all you really want to do is milk the casuals until they get bored and wander off to the next thing.


From the second paragraph:

"This analysis takes most of its material from the first playthrough of the game by my friend Rufus, which I had the pleasure of observing from beginning to end. Watching him, a complete newcomer to the genre, still find his way around Zebes in pretty much the same way I'd do, almost never once getting lost or stuck for any considerable amount of time, made me question how that could be. This analysis is my answer."

I presume this happened recently, with a gamer of "today".


Your logical fallacy is, drum roll... : Hasty generalization

Congratulations! You win nothing.


The popularity of the opaque and difficult Dark Souls/Bloodborne games proves that there is space for games that challenge the player.

What's different now than the era of Super Metroid is that there are vibrant communities where people can deeply discuss the mysteries of games and help fellow gamers discover how to move forward.


The key difference from this classic design style is that the Souls games don't really have the "invisible hand" guiding players in the right direction. It's almost the opposite.

Almost everyone who played Dark Souls probably has a story about heading in the wrong direction from the first hub world, and ending up in one of two locations that are almost impossible for a newbie (typically the skeletons) rather than the intended "slightly harder than the tutorial" area. The only thing that varies is how long they spent hopelessly beating their head against the wall before discovering the intended path.

In Bloodborne that's been replaced by half the players new to the series not realizing they can equip a weapon, and consequently spending hours trying to defeat the (rather tough) first enemy barehanded.


I absolutely adore the mental image of Dark Souls using its invisible hand to push players off of cliffs. :) It's a nifty demonstration of differing philosophies: Super Metroid wants to guide the player to minimize frustration, so that they get the satisfaction of non-linear exploration while avoiding the boredom that comes from fruitlessly trekking through the same place over and over and over and over again. Meanwhile, Dark Souls doesn't care a whit about frustration, and aims to give players a feeling of deserved triumph when they finally manage to surmount the previously insurmountable. I think there is room for games of both philosophies to exist, though they will surely appeal to different tastes.


People who want games like this have left that particular market. The players haven't changed; it's the group that's willing to spend the most money that has.


I think one big difference is that we have the internet now, with wikis, GameFAQs, YouTube walkthroughs, etc. I played Super Metroid using a walkthrough, and that completely breaks the illusion of exploration, because it's then clear to the player that the game is linear and there is always only one way you can go. (Metroid 1 holds up better in that situation, because it actually gives the player genuine choice about what order to do the different challenges.)

The most popular game right now is probably Minecraft. Note how it exactly takes advantage of this environment: in order to have fun in it, you want to read tutorials about how to build various stuff.


There is such a wide variety of games available today that any sweeping statement like yours is by necessity untrue. But even if your thesis were correct, your conclusion is unsupported and honestly nothing more than insulting.


I don't necessarily agree with parent, but see Gran Turismo for a game that got a lot easier, with more content, as the series developed.

I'd love to see someone comparing the licence tests in each game.


MAD


Such statements are not legally binding and thus completely irrelevant in a lawsuit.


Promises on which another party reasonably relies can be legally significant even if not made in a binding contract. See, particularly, the doctrine of promissory estoppel.


But Google didn't make any promises to anyone specifically. They just made a statement that they will not actively sue.

How would you justify that you experienced negative consequences because Google decided to defend their own patents by actively suing?

I guess it is complicated...


1. The current version of the library contains a profound design flaw - it allocates memory.

What do you mean by that? Could you elaborate on how you would design it to not allocate any heap memory?


The design of memory access behaviour is absolutely paramount in high-performance software. Such software has a great deal of time and effort invested in minimizing memory accesses and, where they must occur, making them cache-friendly - for example, ensuring the buffer being read from is allocated next to the buffer being written to, so both are covered by a single TLB lookup. This requires complete control over memory allocation. A library which not only makes its own allocations, but even more staggeringly does so at run-time, is beyond the pale - absolutely unusable.

The next release performs no memory allocation; all allocation is performed by the user and passed into the function calls as pointers to those allocations.

Users can still of course perform run-time allocations if they wish, but now they can also fully pre-allocate, in ways which are memory access friendly given the work being performed by their software.


Interesting choice. One is still able to add a module that wraps the calls with a custom allocator.


It is so close to full C11 support.

I think just threads.h is missing now.


I haven't really heard much buzz about C11 since its release. I don't mean to sound snarky (I'm really curious): does anyone really use C11 at all?


I use C++11 and it's awesome. It's expressive like python but still maintains strict type safety. I built a (still young) curl wrapper using C++11: https://github.com/whoshuu/cpr

Edit: Misread the parent posts and thought they were referring to C++11. Today I learned C11 is a separate standard.


Isn't that C++11?


Nice-looking project, but unfortunately it is GPL - unlike curl itself, which uses an MIT-like license.


Too bad proprietary projects can't link against it. A loss for us all.

Seriously? GPL stuff can be used with any internal stuff, and almost any Free stuff. Isn't that enough? Where is the loss exactly?

Edit: okay, that wasn't the author's intention, and the consistency argument is a good one (least astonishment and all that). Still, the general argument holds.


Excellent point, I just changed the license to MIT to be consistent with curl. Thanks for catching that.


That looks like a C++ project that uses C++11, not C11.


As an embedded programmer, I have to say that C11 atomics are a much better fit for device register access than volatile. Unfortunately, since many commercial C compilers don't even support C99 fully, I don't expect to see C11 in common use anytime soon.


musl libc has a C11 threads implementation.


One of her comments:

well, if its over 100 mSv/h, then acute radiation sickness begins after ~2 hours (noticable reduced blood cell count from 200 mSv acute dose onward, though you'd only FEEL sick after another 3 or 4 hours). but yeah, as the inverse square law applies, there's nothing to fear with this particle. and yep, the spectrum was done by a HPGe, as usual. :)


...and it was maxing out her meter, which I think was showing 100 mSv/h, wasn't it?


Yes, but radiation sickness is caused by prolonged exposure - that is, 100 mSv/h for two hours (dimensions are important here; dosage is in Sv). Additionally, that's for a full-body dose, and she was mostly irradiating her fingers.


Ahhh, thank you.

I thought it meant that if you're exposed to 100 mSv/h, you'll get sick around 2 hours later... but what it means is that a full-body dose for 2 hours is enough to make you sick.

Therefore 5 minutes to the fingers is probably OK.


I think the key is whether that was the dose to the whole body, not just from a single grain in her hand. Imagine being covered in those grains for 1 hour - that would be the equivalent.


And what happens if you "only" irradiate your hand? Presumably some of the alpha/beta particles can go into your blood and then to the rest of your body?

What damage is being done by holding that grain?


Alpha particles (high-energy bare helium nuclei, actually) and beta particles (high-energy electrons, actually) don't "go into your blood"; they are just absorbed by atoms in your skin, and they break a few molecules those atoms were part of along the way. If they go deep enough they may break up some DNA in your skin cells. If they got to the blood they would just be absorbed by the blood itself and break some molecules that float in the blood - which would be pretty harmless. If the dose is really big they may even turn some atoms in your skin's cells radioactive, and yeah, these could then go around your body, but at such a dose you'd have burns on your skin already.

Her biggest problem from holding something like that radioactive grain in hand would be the gamma rays causing mutations in the cells in her hand that would then lead to cancer in the long term. I'd imagine she will be sick with some bone cancer or leukemia or a random sarcoma in like 10-15 years from now. Maybe even longer if she has inherited some good DNA-repair genes and anti-cancer-immunity genes. Or sooner if she wasn't so lucky at the genetic lottery and if she has other risk factors too...

And then there's the radioactive dust... I'd imagine she has inhaled quite a lot of it along her trips, and this is probably her biggest short term (think "less than 10 years") concern.


It's really only neutrons that can transmute other elements into radioactive ones. Charged particle or EM radiation might damage your body's chemicals but they won't harm your atoms.

Generally a whole-body dose of 1 sievert will increase your odds of getting cancer by 10% over your life. I don't know how much she's been exposed to, but my guess is that while future cancer is a concern, it really isn't a sure thing.


> Charged particle or EM radiation might damage your body's chemicals

Which is probably the biggest problem for biological systems - alterations in the DNA caused by radioactivity.


The Geiger counter estimates the sieverts/h based on the local amount of radiation, assuming that is the dosage all parts of your body are exposed to. If significantly less of your body is actually exposed to that level of radiation, the actual dosage you receive will be less than estimated.


You're right but it's not about the dose only. If I blast a millimeter-sized piece of your finger with neutrons the body dose might be extremely small, but that piece of finger tissue might itself become radioactive and the newly-made-radioactive atoms in it will keep irradiating the cells around. Then some of them will get cancer mutations. Then some of them will go long enough along the path to cancer. Then some of them will metastasize...

The girl seems to have a good understanding of dose safety, but a poor understanding of biological effects - just like everyone else. Beta vs alpha vs gamma radiation have very different biological effects, and sieverts, the measuring units that take some of these differences into account, are a gross oversimplification. You can kill someone with a radiation dose well below the lethal one in Sv, and you can survive a dose above the lethal one...

Biological systems have very complicated relationships with radiation in general, that's why even the health impact of non-ionizing radiation needs to be investigated (yeah, your cellphone is most likely safe ;) ...but you know that you have at least some cells in you body that respond even to non-ionizing EM radiation - they are in your eyes and thanks to them you can read this :) )


High radiation doses to small areas are a really complicated issue. One example is a greater-than-lethal dose in 1978 to a person's head, where mostly it was just a question of direct tissue damage.

http://en.wikipedia.org/wiki/Anatoli_Bugorski Note: You may want to avoid doing an image search though.

More generally in the short term it's fast growing tissue that's most at risk. So, exposure that is quickly lethal may not do a lot of damage to slow growing tissues. Long term you can often remove severely affected tissues so a person can often survive much higher doses to their arms and legs vs torso.

PS: Another issue relates to population sizes. A 0.1% risk to everyone in the US kills more than 300 thousand people and is a major issue. A 0.1% risk to one person may not seem that bad, as that's around the default risk every month for a 55-year-old male.


No, alpha particles cannot pass through skin. Beta might.


Then of course, if some speck of radioactive material gets lodged in your body the inverse square law works against you.


There is no pointer arithmetic in the line:

    struct usb_line6 *line6 = &podhd->line6;
Offset is calculated "under the hood". Pointer arithmetic is defined with the arithmetic operators: +, -

-> and . are not arithmetic operators.


You sound very sure of yourself.

If you look at the definition of the -> and [], you might learn something!


Clearly, you don't have anything constructive to say, so you have to resort to ad hominem.

The [] operator doesn't have any place in this debate.


Well, as far as I understand it, `->` works in a similar fashion to `[]`; the difference being that one operates on arrays while the other operates on structs.

What @jwatte tried to say is that `->` works like this:

    // foo is struct thingity*
    (char *)&foo->bar == (char *)foo + offsetof(struct thingity, bar);
The `[]` does a similar thing:

    // s is char*
    *(s+i) == s[i]


Sure, &foo->bar ends up at (char *)foo + offsetof(struct thingity, bar), but there is no pointer arithmetic involved, since C doesn't specify how the member access (-> or .) is actually calculated.

It does however specify that the [] operator is equivalent to pointer arithmetic. But we are talking about the -> and . operators, not [].

