I'd encourage you to look into Cognitive Load Theory[0].
A key finding is that people are not creative generally but are instead creative relative to their skills.
e.g. if you are not a skilled programmer, your programming is very unlikely to be creative except insofar as it draws on a non-programming skill you have acquired.
This suggests that most creative solutions will come from people who learn many things. You'd expect people with high IQ and conscientiousness (which implies being organized and driven) to be the most creative group. This lines up with the best evidence I've found; anecdotally it matched my experience as an educator and continues to match it as a software developer.
I think this may also explain your own experiences. The model predicts that a smart person who learns a lot less about a topic than their less smart peer will be less creative when thinking about that topic.
Slightly adjacent to the article, I don't think Apple cares all that much about the AR metaverse/ecosystem.
My own prediction, using many of the same data points as the author, is that Apple is trying to create a suite of features that act as a sort of Software Personal Assistant. For years Apple has consistently put in better sensors and larger TPUs than strictly necessary for the expected lifetime of the device. We're already seeing some of the results of this with the Health app.
Apple understands the profit potential of platforms, so they'll make some of the data that enables these features available to app developers, and that in turn may enable AR, but I doubt any Apple executives are seriously focused on bringing AR capabilities to developers.
I hope you're wrong. I understand that the technology for the dream that "Google Glasses" was selling isn't quite there, but I hope Apple has a research team moving closer and closer to it.
Non-gaming AR tech will have to be extremely good out of the gate to dodge the issues that sank Google Glasses. I suspect it will remain an active research project for as long as it takes, and in the meantime Apple will chart their product roadmap along a sequence of technologies that they can be confident of having ready on 1-5 year timescales.
They may still make that, but I suspect developers will not get full access to the power of the platform.
A thought I should have fleshed out in the above post: the kind of sensor data that enables AR can also be used to deduce a great deal of personal information[0]. With Apple taking a strong position on privacy I suspect they will make only a limited subset of data available to developers.
"Not quite there" is an understatement. The advancements necessary in optics, battery technology, and GPU performance for a practical AR wearable are considerable, and that's before considering the need for a fashionable form factor and practical heat dissipation. IMO we're at least a decade out from a serious MVP.
As someone who's produced a large volume of educational content in my career, this idea ends up producing content that even most serious practitioners cannot consume.
The problem is that even amongst "serious" programmers, people often have a patchwork of knowledge with small, unpredictable gaps in understanding, many of which will surprise you. You will find these gaps even if you've taken the time to handpick vetted experts. Once you account for this patchwork of gaps in verified experts, you'll often notice that you're really close to a tutorial for relative beginners (which most people are anyway), so you might as well go the full distance.
The problem generalizes to a variety of subjects but is particularly pernicious in programming where small details are often of much greater significance.
I've watched several people go through this conversation after a company made this demand. For this to succeed it seems that you need to be indispensable, or be part of an organized group that is indispensable, to the highest person in your management chain who is demanding a return to the office (probably the CEO). If that person isn't aware you're indispensable, you will have to try to convince them.
> there is a court and criminal justice system for a reason
Having supported a few friends trying to push these kinds of complaints through the courts, I think the precise reason this kind of naming and shaming has become so commonplace is that the criminal justice system isn't working well.
Even in a fairly straightforward sexual assault case in a liberal jurisdiction, with witnesses, I've watched a friend struggle as members of the justice system verbally insulted and degraded them as they tried to obtain justice for themselves.
Having seen all that, when I see a post like this I understand why the author did not go to court, and I don't question it. If we took the time to actually fix the courts, I'd be much more skeptical of claims that had not been presented to law enforcement.
I feel for people like your brother who are victims of people abusing the trend, but as long as our justice system fails victims so horribly I think this is the least bad solution available.
Can you please provide proof that our justice system is failing victims? Do you just mean that the conviction rate is anything less than 100% for accusations?
Also, can you clarify what you mean by "verbally insulting and degrading them"? It's possible you just mean the lawyer is accusing them of lying... which is what you do when you think a person is lying. The accused has a presumption of innocence, and the accuser may need to be cross-examined under some reasonable amount of emotional stress to see if their behavior under stress reveals that they are lying. There's not really any way around this other than "well, we'll just assume they're telling the truth and anyone they accuse is guilty", which is a far worse solution in my opinion.
> Can you please provide proof that our justice system is failing victims? Do you just mean that the conviction rate is anything less than 100% for accusations?
I'm confused that you think this is something I can prove through citations. The way we adjudicate whether someone is a victim is through the courts; my claim is that the courts do a bad job of this. There are only two things I can think to point you at:
1. The numerous written accounts online of women attempting to get justice and being stonewalled. Some of the more famous cases during the beginning of the #metoo era showed this.
2. I can say that every woman I know who has gone through the courts found the experience unnecessarily degrading (n~=20), and while I believe all of them, only a quarter (n~=5) received a guilty verdict. I know far more women who did not go through the process because of stories from women they know.
I'm personally convinced; if you're not, I understand, but I'm not ready to expend the energy digging up cases to try to convince you.
> can you clarify what you mean by "verbally insulting and degrading them"
Literal slurs, misogynistic generalizations about women being temptresses, stereotypically horrible questions such as "were you asking for it?"
> Can you please provide proof that our justice system is failing victims? Do you just mean that the conviction rate is anything less than 100% for accusations?
Nobody is asking for near 100% conviction rates.
In the UK there are about 150,000 rapes per year. Police record about 60,000 crimes. CPS prosecutes fewer than 5,000 cases. Courts convict fewer than 2,000 people.
Rape is a very serious crime. A less than 2% conviction rate is failing the victims.
Those engineers have the good fortune to be working in a fairly constrained space. New materials and building techniques become viable slowly over time.
Software developers are able to build abstractions out of thin air and put them out into the world incredibly quickly. The value propositions of some of these abstractions are big enough that they enable _other_ value propositions. The result is that our "materials" are often new, poorly documented, and poorly understood. Certainly my experience writing software is that I am asked to interact with large abstractions that are only a few years old.
Conversely, when I sit in a meeting with a bunch of very senior mechanical engineers every one of them has memorized all of the relevant properties of every building material they might want to use for some project: steel, concrete, etc. Because it's so static, knowing them is table stakes.
I'd say this difference in changing "materials" is a big source of this discrepancy.
Also, the construction industry is a huge mess, and anyone telling you that things don’t go over budget, get torn out because someone messed up a unit conversion somewhere, burn down during construction because someone didn’t follow some basic rules, or turn out to be nearly uninhabitable once complete because of something that should have been obvious at the beginning - is just ignorant. These happen on a not infrequent basis.
The big difference is no one really tries new things that often in construction, because for the most part people have enough difficulty just making the normal run of the mill stuff work - and people who have the energy to try often end up in jail or bankrupt.
In software, we're so young that we end up doing mostly new things all the time. Our problems are simple enough, and bend to logic well enough, that we usually get away with it.
If you’ve ever poured a footing for a building, then had the slump test fail on the concrete afterwards you’ll sorely be wishing for a mere refactoring of a JavaScript spaghetti codebase under a deadline.
Just as a note, the article specifically lays out that you are only a loser in a pure economic sense, you have not maximized monetary profit and have chosen to prioritize something else. Not an idiot at all :)
I'd be curious to see experiments with variable numbers of line breaks to create additional structure. I find my own code significantly more readable by separating certain "sub-blocks" of code with two line breaks, with their internal structures having single line breaks.
thing_1_a
thing_1_b
thing_1_c

thing_2_a
thing_2_b
It fits into the framework proposed here but is not mentioned specifically as having been investigated.
Not just in Javascript. Since blocks define a scope, visibility and unexpected lifetime issues may occur in other curly-braces languages like C and C++ just as well.
Using blocks like that is something you should be very careful about - it might have some surprising effects, especially for novice programmers.
The blocks here act exactly the same as any other blocks, like the ones you write for loops and ifs etc. So I wouldn't exactly call the consequences "unexpected".
The problem is, to most people it just looks like you made a mistake. If you want to create breaks in your code, consider adding comments or breaking blocks out into their own functions.
Does it look like I made a mistake to most people? I've never received a comment in a PR about it, and I observe other developers using the same technique, so I'm surprised to hear this. Do you feel very confident in this assessment? Just trying to gather feedback here.
Maybe not a mistake, but code smell, often, yes. If you add blocks, it's usually because you want to group things together, and avoid scope leakage (if that's an issue, then you're probably doing similar things multiple times). Often a function will be a better fit here.
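To make that concrete with a hypothetical Python sketch (all the names here are invented): instead of an anonymous grouping block, the grouped work gets a name, and its locals stay scoped to that name.

```python
# Hypothetical example: naming grouped work as small inner functions
# instead of relying on an anonymous block for visual structure.
def report(orders):
    def total_revenue():
        # Locals here can't leak into the enclosing scope.
        return sum(o["price"] * o["qty"] for o in orders)

    def top_item():
        return max(orders, key=lambda o: o["qty"])["name"]

    return f"{top_item()} leads; revenue {total_revenue()}"

orders = [{"name": "widget", "price": 2, "qty": 5},
          {"name": "gadget", "price": 10, "qty": 1}]
print(report(orders))  # widget leads; revenue 20
```

The named helpers document intent the way a comment would, while also giving the scoping benefit a bare block only partially provides.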
I think the author's POV still stands; it just perhaps needs some massaging of the framing.
> When people acquire skills, and that makes some tasks easier compared to untrained people, that difference categorically is not subjectivity. It is not subjectivity because we can reproduce the training effect in person after person, and even measure it with some kind of numbers that we can plot on nice graphs and see things like that similar "learning curves" are consistently reproduced in different people.
It sounds like you're referring to convention. Adhering to conventions a reader is familiar with definitely improves readability, but _any_ style can be improved by adherence to convention, so we want to ask, what is the most readable style before we start layering on conventions?
> In general, object orientation is a reasonably elegant way of binding together a compound data type and functions that operate on that data type. Let us accept this at least, and be happy to use it if we want to!
Is it elegant? OO couples data types to the functions that operate on them. After years of working on production OO code, I've still never come across a scenario where I wouldn't have been equally or better served by a module system that lets me co-locate the type with the most common operations on that type, with all the auto-complete I want:
    // a module co-locating a type with its common operations
    MyModule {
        type t = ...
        func foo = ...
        func bar = ...
    }
If I want to make use of the type without futzing around with the module, I just grab it and write my own function
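In Python terms (a hypothetical sketch; the names are invented), the same idea is just a module: a plain data type plus free functions that live beside it, none of which the type depends on.

```python
# rational.py - hypothetical module co-locating a type with its
# most common operations instead of attaching them as methods.
from dataclasses import dataclass
from math import gcd

@dataclass(frozen=True)
class Rational:
    num: int
    den: int

def simplify(r: Rational) -> Rational:
    g = gcd(r.num, r.den)
    return Rational(r.num // g, r.den // g)

def add(a: Rational, b: Rational) -> Rational:
    return simplify(Rational(a.num * b.den + b.num * a.den,
                             a.den * b.den))

# A caller who only wants the type can import Rational alone and
# write their own functions against it, ignoring simplify/add.
```

Nothing here is coupled: `add` is just a function that happens to live next to the type it serves.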
Totally. That structure also allows for multiple dispatch, which makes generic programming much, much more pleasant than OO (which is basically single dispatch). E.g. see Julia.
To elaborate, tying methods with the individual/first argument makes it very difficult to model interactions of multiple objects.
> tying methods with the individual/first argument makes it very difficult to model interactions of multiple objects.
The best example of this in Python is operator overloading. We need to define both `__add__` and `__radd__` methods to overload a single operator. Even then, there are situations where Python will call one of those where the other would be better.
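A minimal sketch of that asymmetry (the `Meters` class is invented for illustration): making `2 + Meters(3)` work requires `__radd__`, because `int.__add__` knows nothing about the new type.

```python
class Meters:
    """Toy wrapper illustrating why one operator needs two methods."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Handles Meters(3) + 2.
        return Meters(self.value + other)

    def __radd__(self, other):
        # Handles 2 + Meters(3): int.__add__ returns NotImplemented,
        # so Python falls back to the right operand's __radd__.
        return Meters(other + self.value)

print((Meters(3) + 2).value)  # 5
print((2 + Meters(3)).value)  # 5
```

Delete `__radd__` and the second line raises a TypeError, even though the operation is conceptually symmetric.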
- short names inside modules. I.e. you might have a function called Foo.merge(...) instead of x.merge_with_foo(...)
- a way to bring modules into scope so you don’t need to specify the name
- not using that many modules. Most lines of code won’t have more than one or two function calls so it shouldn’t matter that much (other techniques can be used in complicated situations)
The key advantage of type-dependent name resolution is in using the same names for different types. You might want to write code like foo.map(...) and it is OK if you don't know the exact type of foo. With modules you may need to know whether to call SimpleFoo.map or CompoundFoo.map.
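For what it's worth, Python's `functools.singledispatch` offers a middle ground: a module-level function that callers invoke by one name without knowing the concrete type. (The `SimpleFoo`/`CompoundFoo` types below are invented for illustration.)

```python
from functools import singledispatch

class SimpleFoo:
    def __init__(self, xs): self.xs = xs

class CompoundFoo:
    def __init__(self, groups): self.groups = groups

@singledispatch
def mapped(foo, f):
    raise TypeError(f"no mapped rule for {type(foo).__name__}")

@mapped.register
def _(foo: SimpleFoo, f):
    return SimpleFoo([f(x) for x in foo.xs])

@mapped.register
def _(foo: CompoundFoo, f):
    return CompoundFoo([[f(x) for x in g] for g in foo.groups])

# Callers write mapped(foo, f) either way; dispatch on the first
# argument's type picks the right implementation.
print(mapped(SimpleFoo([1, 2, 3]), lambda x: x * 2).xs)  # [2, 4, 6]
```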
If anything, this is an aid to type-checking. Since MyModule.foo takes only things of type t, the type-checker's job is extremely easy and it will help you a lot more in cases when your program is incomplete.
So often when using C#-style fluent APIs I find that I'm completely on my own and have to turn a half-written line into something syntactically correct before Intellisense gives me anything useful. Using an F#-style MyModule.foo, the compiler can tell me everything.
I agree. Also, here are two things I find not great with OO style apis:
1. Functions with two arguments of your type (or operators). Maybe I want to do Rational.(a + b), or Int.gcd x y, or Set.union s t. In a Java style api I think it looks ugly to write a.add(b) or whatever.
2. Mutability can be hidden. If you see a method of an object that returns a value of the same type, you can't really know whether it is returning a new value or mutating the object and returning itself to offer a convenient chaining API. With modules there isn't so much point in chaining, so that kind of mutation is harder to hide.
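A hypothetical illustration of that ambiguity (the class is invented here): both behaviors read identically at a chained call site.

```python
class SortedBag:
    def __init__(self, items):
        self.items = list(items)

    def sort(self):
        # Mutates in place but returns self for chaining --
        # indistinguishable at the call site from returning a copy.
        self.items.sort()
        return self

xs = SortedBag([3, 1, 2])
ys = xs.sort()       # reads as if ys might be a new value...
print(ys is xs)      # True: same object, silently mutated
```

A module-style `sorted_bag(xs)` free function would be expected to return a new value; the method form leaves the question open.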
I'm sure it enables other things, but I find the "Module.t" thing very inelegant. Names should be meaningful and distinguish the named object from other things that could go in the same place, but "t" is meaningless and there's often nothing else (no other types anyway) for it to distinguish itself from. It feels like a hack, and I much prefer the mainstream dot notation.
What's the advantage, other than some Intellisense cleanup through hiding some functions? As you're well aware, you can write new functions for OO types as well.
Either you lose private/protected state and its advantages, or you replace it with module-visible state and its disadvantages.
It doesn't seem like this necessarily stops inheritance. You'd need to throw away method references, or you're just back to open-faced classes.
You can still write new functions that blow away any assumptions previously made about type state, causing bugs. It seems like it could be even worse, seeing as you possibly have fewer access controls for fields.
With the type system of Haskell, and maybe Rust too, you can do a lot to prevent the type from being able to represent invalid state. Immutability also does a lot to prevent issues with invalid state and public fields.
If you want to prevent people outside the module from adding functions, you can do that in Haskell by exporting only the type name. This prevents other modules from actually accessing any data inside the type, while still allowing them to use the type in functions; it sometimes means writing getter functions to read values from the type. I'm pretty sure Rust allows for something similar.
In general, FP still allows for encapsulation. Moreover, it uses immutability and, to some extent, the type system to prevent illegal states of data types.
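A hedged Python sketch of that idea (the `NonEmpty` type is invented): validation at construction plus immutability means no value of the type can ever hold an illegal state, so downstream code needs no defensive checks.

```python
from dataclasses import dataclass

# Hypothetical "non-empty sequence" type: the constructor is the
# only gate, and frozen=True rules out later mutation.
@dataclass(frozen=True)
class NonEmpty:
    items: tuple

    def __post_init__(self):
        if len(self.items) == 0:
            raise ValueError("NonEmpty requires at least one item")

def head(xs: NonEmpty):
    return xs.items[0]  # safe: an empty NonEmpty cannot exist

print(head(NonEmpty((1, 2, 3))))  # 1
```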
No, a class is a data type coupled to functions operating on that class. For the purpose of this discussion a module is a way to semantically group types and functions without coupling them (languages like OCaml get fancy by actually typing modules which enables some neat ideas, but is not important to this discussion).
[0]: https://www.amazon.com/Cognitive-Explorations-Instructional-...