<BEGIN PSA>
Please, please, please, do not use hover controls. They are useless for touch users as well as screen readers, and the discoverability is poor.
Modals are also really really hard to make accessible, and are often confusing. If you MUST use modals, please use a well-vetted library instead of rolling your own.
There's a lot of things that look really cool but make life difficult for people that aren't you. We appreciate your consideration when you build your UX.
<END OF PSA>
Visual Studio Online uses the hover pattern for some buttons. I hate it because it makes it hard to remember what the options are and hard to click on them. Design over function is sadly the norm nowadays.
At least in case of tablets with pens/styluses (like the Surface line or Dell Latitudes), the device can recognize when you're hovering over something with a pen. It meshes nicely with the mouse-oriented design. I wish there was a way for touchscreens to detect hovering fingers, then on-hover controls could be useful on phones and penless tablets.
It's not really a clear distinction, though. I have a laptop that has a touchscreen, but also a regular touchpad and sometimes I use an actual mouse with it. Sometimes I touch the screen if it's easier even if I have a mouse connected. I also sometimes remote into it. Then there's my tablet: I can use a bluetooth mouse with that if I want. Or an external touchpad.
I imagine I'd err on the safe side. If the screen has touch functionality (including laptops with touchscreens), then don't do actions on hover. Is there a way to reliably detect the existence of touch functionality through the browser?
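For what it's worth, here's a minimal sketch of that detection (TypeScript, assuming a reasonably modern browser), using the standard `any-pointer` media query and `navigator.maxTouchPoints`. Given the hybrid devices described above, treat it as a heuristic, not a guarantee; the class name at the end is made up for illustration:

    // Heuristic only: touchscreen laptops and tablets with mice mean there is
    // no fully reliable answer. Read the result as "touch might be used",
    // not "touch is the primary input".
    function mightUseTouch(): boolean {
      // A coarse pointer usually means a finger rather than a mouse.
      const coarsePointer = window.matchMedia("(any-pointer: coarse)").matches;
      // maxTouchPoints > 0 means the hardware reports touch capability.
      const reportsTouch = navigator.maxTouchPoints > 0;
      return coarsePointer || reportsTouch;
    }

    if (mightUseTouch()) {
      // Hypothetical hook so CSS can disable hover-only affordances.
      document.documentElement.classList.add("assume-touch");
    }

Even with a check like this, the safest design is the one the PSA suggests: make the control reachable without hover and treat hover as an enhancement.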
At least for screen readers, hover controls can work really well as long as you avoid hiding them with the display or visibility properties. Just position: absolute and left: -999em will hide them visually but still let the screen reader 'see' them.
Of course, this will still not really help with touch devices.
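To make that concrete, a tiny sketch of the off-screen technique (TypeScript; the `.hover-only-action` selector is hypothetical). The key point is avoiding `display: none` and `visibility: hidden`, which remove the element from the accessibility tree:

    // Hide an element visually while keeping it exposed to screen readers.
    // display:none / visibility:hidden would remove it from the accessibility
    // tree; positioning it off-screen does not.
    function visuallyHide(el: HTMLElement): void {
      el.style.position = "absolute";
      el.style.left = "-999em";
    }

    // Hypothetical usage; ".hover-only-action" is an illustrative selector.
    document.querySelectorAll<HTMLElement>(".hover-only-action").forEach(visuallyHide);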
I run the Pivotal office in Singapore. Breakfast is wonderful, but it's hardly mandatory.
Standup at 9:06 is important, though. In order for pairing to work, you need to have everybody there at the same time.
The iron bargain is that you work a rigid eight hour schedule, but then you go home (or wherever) and don't think about work. No overtime or crunch mode, ever.
It's not for everybody, but many folks really really really like it.
I've worked with Eno (the OP) on the product that he's talking about (Cloud Foundry). It's worth noting that the first versions of Cloud Foundry were written in Ruby/Sinatra for the reasons that you're talking about.
We switched to Go because Ruby's concurrency model was obtuse/inefficient and scaling the codebase was challenging even with excellent developers and great engineering practices. The performance gains with Go were a really nice bonus.
If you're solving Google-style problems like writing a PAAS, then the benefits of Go outweigh the headaches. If you're writing a more straightforward CRUD-style Web API, then Rails might be a better choice.
I enforce a militant "40 hours only" policy for my employees, and it's incredible for productivity. When you work longer hours defect rates increase, code quality drops, and things are on fire all the time. You also lose really good people due to burnout.
Sustainable pace is super important for creating and maintaining high performance teams. Crassly, it's just a more profitable way of doing business.
I wish more managers would stop buying into the myth of "time in seat == productivity", and look at the real output of their teams. When you actually run the numbers there's a lot of results that run counter to conventional wisdom.
Pfft. A man-hour of labor is a man-hour of labor. Which is why I fully understand why my employer is requiring me (and the others on my team) to make up the 48 hours of labor I lost when I evacuated due to Hurricane Irma (even though the office was of course closed for much of that time).
I'm spending the extra hours perusing/posting on HN and looking at job listings :|
> Pfft. A man-hour of labor is a man-hour of labor.
Not all hours are created equally.
I do my best work with hyperfocus.
I don't just mean more work, but far higher quality work that's at the edge of my intellectual capabilities.
This is the work that's most rewarding and gives results that I'm proud of.
But I can't get significant time "in the zone" on a 40-hour week.
All the normal but necessary distractions take at least four hours a day, and I need at least an hour or two of "non-zone" work before I can hit my stride and achieve flow.
So I find 60-70 hours a week is essential to doing my best work.
But I can't sustain that level of effort continuously for years.
So I balance that out with a week a month of low-intensity "work" (slacking/procrastinating/socializing) plus at least twelve weeks per year of contiguous time for uninterrupted travel, long-distance trekking, rebuilding relationships damaged by hyperfocus, etc.
I think you'll find that pushing back on the four hours of usual distractions per day will be more productive than working long hours.
Personally, I find the usual distractions more tiring than coding, based on years of experience.
I estimate at least 2 hours of work lost per hour of meeting, once you add in meeting prep, calendar wrangling, walking to and from the room, before- and after-meeting back chatter, realizing you only have 30 minutes until lunch or your commute on either side of the meeting, being tired from presenting/listening/arguing, etc.
One strategy is to have meeting-only or meeting-free days, so meetings primarily ruin your productivity for other meetings and not your actual work.
Meeting-only days scale with team size, but can backfire by enabling the total number of hours in meetings to increase.
"I think you'll find that pushing back on the four hours of usual distractions per day will be more productive than working long hours."
Pretty much this. When you are explaining long hours by on-the-job ineffectiveness, then you need to deal with that ineffectiveness - especially if you are in any kind of leadership position. Otherwise you end up rewarding ineffective workers and punishing workers with better organizational skills.
The general feeling that it is OK to work ineffectively because we stay late anyway was one of the things I resented the most in my previous job. And people who caused interruptions and wasted everybody's time were seen as "hard workers" because they stayed late.
I enjoyed this (+1), but sadly and inevitably there are replies saying essentially "no no, all man-hours are not equal". Why doesn't the intertube get satire/sarcasm/irony?
Text is a bad way of conveying tone, though there was the smiley at the end. I interpreted it the same as you, but I can see how a lot of people would miss the tone.
Poe's Law is why the internet doesn't get sarcasm - that without additional information, it's impossible to differentiate between an extreme position and one mocking it.
The comment we're talking about was skillfully constructed; there's only one way of interpreting it. I'm on a phone so I'm quoting from memory, but the second paragraph, starting "that's why I am spending the time perusing HN...", is just an elegant way of saying that the first paragraph was tongue in cheek.
Oh, yea. Although, this sort of thing is actually fairly common for my industry (defense contracting), so I'm not sure any of us will end up anyplace better.
The funny thing is, if you just eat the cost, any halfway conscientious worker is going to pay it back in kind. Maybe a few extra hours here or there, maybe a little bit more emotional investment in the work.
Not all hours are created equal. If you're doing simple enough tasks, perhaps your output is relatively stable. When I work on hard problems, I've had single hours that can outperform a day or two. Getting rid of most or all of those key creative moments by keeping employees in a constant state of fatigue from long hours is bad. Studies show that for typical employees, after about 6 weeks of 70-hour work weeks, a 70-hour week produces less than a 40-hour week from a fresh, non-overworked employee. So pretty quickly hours become half-hours in terms of production.
I partially agree with you, but personal productivity varies. So in our company we have a rule: "work as much as you like; usually we expect no less than 30 hours, but it is OK to do 50. Anything beyond these ranges is odd".
Well many managers and execs might respond with "well then, give 'em more work! We obviously aren't giving them enough to do!"
Salary exempt is usually a losing game for developers. Crappy places will say "we pay you a salary to get the work done" and understaff and/or overwork. If it takes you 60 hours then you have to work 60 hours. But if it only took someone 30 hours, they can't leave early because a) deadlines will be tightened to ensure 40+ hours or b) more work and assignments will be given.
Same experience. I've used the extra time to start internal side projects, a few of which I've earned $$ for. Surprisingly, no one asks how I have the extra time.
I used to be someone who worked only 4 hours max at any company and used the remaining time to stay on top of tech stacks (SysAdmin). But over the last four years I self-trained on stock trading, and it's close to becoming my main job in another year or so. I now look for trader jobs where I can trade for a profit percentage, but in my country the salaries for traders are nowhere near comparable.
Everyone is different, so good managers set targets based on past performance. If you set the bar at what you can accomplish by working 3-5 hours per day, they'll assume it takes you 8 hours per day, and give you work based on that. If that works for you, great.
Some thinking-intensive work can push you to your limit in just 3 hours a day. After that, you generate more problems than solutions. Mathematicians in particular are prone to this. Similar cases in software development need to be respected.
3 hours of thinking-intensive work is fine. Then take lunch/walk/shower, and sit down to write documentation, update bug reports, chat with a coworker about your ideas, review someone else's code.....
>I wish more managers would stop buying into the myth of "time in seat == productivity", and look at the real output of their teams.
I think a big part of this is the generally ambiguous nature of knowledge work. If you could quantify work into discrete widgets, it would be easy to measure people against their output, but on stuff that's fuzzy and hard to quantify it's not so easy.
It actually takes a fair bit of work and intimate knowledge of how stuff gets done on the ground to make realistic projections about output or to understand what “real output” even means.
Just as a first approximation, I think other knowledge workers who have enough money and clout to be in control of their schedules would work as a good model for the optimal number of hours spent working over the long term. For example, look at doctors who have their own practice. They (hopefully) like their work, and can determine their own schedule. How many hours do they work? How do they structure their time?
>They (hopefully) like their work, and can determine their own schedule.
Ehhhh. They're not really as "in control" of their schedules as you would think. Their time is boxed in by financial pressures, generally imposed by insurance and Medicaid/Medicare reimbursement rates, and by the administrative requirements of complying with burdensome regulations surrounding both their medical practice AND running what amounts to a small business.
They don’t even really necessarily like their work either. Running your own medical practice has a really high burnout rate and many of the people who do it now tend to be older and/or not fully dependent on the practice for their income (e.g. independently wealthy, willing to retire but just wants supplemental income, married and can rely on spousal support financially, etc.).
I'm mostly the same, but I'm less militant about it and, instead, try to empower my team to handle it themselves. That said, I explicitly promise that any time outside of the normal 40 hours should be taken as comp time. So if someone spends 3 hours one evening dealing with a production issue or we have an upper-management-imposed push to get a feature out, there should be an equivalent number of hours taken off when the employee would have otherwise worked. And it's not time that can be banked to use to increase a vacation, it should be taken immediately and if I notice that someone isn't taking it, I'll tell them to leave the office. I've found that employees who aren't overworked can pull a 70 hour week and actually be that much more productive, but they can't do it on a regular basis and they need to recover afterwards.
Also, to combat the "time in seat" fallacy, I encourage my team to take walks every couple of hours, even if it's just around the office, though they should get outside if it's not raining. They can do it in groups, pairs or alone, but I've found that developers are more creative and better able to think through consequences/permutations when they don't spend too much time being sedentary. And it has the dual benefit of breaking the notion that sitting at your desk is the modern day timecard. As you've correctly noted, getting stuff done and achieving quality are what we should measure, not ability to spend time typing and looking at a screen.
Walks are very productive for exercise and clearing your head. I often try to take a walk with some people on my team. We just talk about whatever comes into our heads - life, hobbies, etc. Sometimes it's work and we end up brainstorming.
We joke if we get caught walking we tell management that we're in a meeting.
My employer has traditionally been very good about 40 hour weeks. However, recently everyone on the team has been doing 55-80 hours per week. On about week 4, I pointed out to my manager that the last time we were on an extended stretch of OT, the company put in place a policy to pay us at an hourly rate equal to our annual salary / 2080 for all OT hours worked. He responded "Yeah, but [project x] was on OT for over a year! Besides, OT is expected at most other companies, and none pay for it." (never mind that most at least have bonus programs). It's now been over 3 months of OT for my team, and personally I think there is no end in sight. Of some slight consolation, the less senior team members do get paid for a portion of their OT (usually around 1/2 to 2/3).
Also, our company doesn't do bonuses at all, doesn't offer anything in the form of comp time, and has an unlimited vacation policy that's basically meant 'next to no vacation' for the past half year or so. Some members of my team have hinted to me that they're looking for new jobs, but I don't think management has a clue.
We use Pivotal Tracker, and it's pretty easy to see how many bugs are popping up and how often.
Code quality shows up in your velocity over time. My favorite saying right now is "quality is future speed". We also do regular reviews to assess things like cyclomatic complexity and how well SOLID principles are being followed.
You are my new favorite person. I love running into these anecdotes - I keep thinking I'm alone in believing that a business can be run according to this policy.
I'm not convinced it's optimal! However, it's a "known good" number that's easy to convince people to follow along with. Running experiments is hard and expensive, so we just haven't tackled that yet.
My personal suspicion is that optimal is probably in the 35-45 range, but I'd love to have more data.
It is "known good" specifically for repetitive manufacturing labor. That's what all of our business structures are optimized for. As work changes to be primarily mental, things will change. It's good that you're thinking about this sort of thing, it will give you a significant advantage in the coming years. Lots of research into how capable humans are of extended periods of mental exertion shows that we can be productive far less than with physical exertion. I suppose it's a good thing that thanks to computers, mental exertion ends up producing productivity multipliers rather than just incremental improvements then!
My suspicion is it's somewhere in the 30 - 40 range. For me personally my productivity starts declining after 30, but I'm fairly certain some of my coworkers can do it longer.
It's what most of the English-speaking societies have centred on over time.
Other places, say Scandinavia, might put it at 37, and some Asian companies might be closer to 45.
But around 40 hours per week is where most modern societies tend to think work/life balance is reasonable, both for the economy and for the individual.
The old union slogan was "8 hours of work, 8 hours of sleep, and 8 hours of leisure." And in many ways that's the origin of our 40-hour work week, now that we only work 5 days a week.
People were happily working around the clock, but I'm fairly certain the people breaking Enigma didn't try doing that more than a fair number of hours a day (fuck if I know what it is, but it's not equatable to making bombs or tanks).
The US was only in that period from Dec 1941 to Aug 1945, just over 3 1/2 years. Assuming that the build-up years (when "everyone knew" we were going to have a war) were not as intense.
I don't think most people working at home during WWII were primarily motivated by concern about their allies abroad. Instead, I'd argue the motivations were:
* supporting a friend or family member serving in the military
* capturing a new economic opportunity (government contracts, etc), following the shockingly hard times of the Great Depression
To that point, there are millions of people suffering around the globe at this moment. If you were motivated by compassion, you'd probably find a different job and put in long hours.
How does a total of 40 hours per week affect the predictability of being there at the same time in a way that 35, 39, or 42 hours maximum per week could not?
My teams do the same thing; it works fine as long as there's some degree of predictability in the schedule. My team does a standard 40-hour work week but only 6 hours per day of required "core hours" used for pairing. The other two hours are flexible so folks can adjust their schedules if they want to get in earlier in the morning and leave earlier.
9-4 you'll be in the office and ready to pair, and lunch for everyone is 12-1 regardless of when you got there; but you can come in at 7 and leave at 4 if you want, or get in at 9 and stay til 6 or work at home later if that works better.
You need that strictness of "pairing time starts now"; otherwise you have people coming and going and needing breaks at all different times, and it gets disruptive.
But the daily schedule is only dependent on the maximum number of hours to the extent that the hours have to fit within that weekly window. If we assume a five-day work week, and the maximum weekly total is 35 instead of 40, then you may only have 5 core hours (or 6 core hours and one fewer flexible hour per day), but the predictability remains exactly the same. Changing the total hours per week does not mean having to throw out a schedule entirely.
I am afraid I still don't quite see what impact the total maximum hours for a week has on the predictability of scheduling. What am I missing?
You're clearly the target market for these new MBPs, and that's great. Glad you have a machine that fits your needs.
People with more strenuous computing needs are miffed because the MBP used to fit our use case, and now it doesn't. There isn't a single Apple product focused on the high performance market any more, and some of us are bitter.
I work at one of those "contracting shops" (Pivotal Labs), and I think you're misunderstanding the reason we push TDD so damn hard: It's not about speed. It's about predictability.
Good TDD forces you into building loosely-coupled code that's easy to refactor and change. When new business requirements come in, the effort for implementing them isn't increased because of your previous code getting in the way. There's an overhead to writing the tests, but it's a fixed continuous cost. As a result, you're always in a position where you can ship a new feature in a predictable (and reasonably fast) fashion.
Not doing TDD often leads to tightly-coupled brittle software, which can be very fast to implement but also difficult to change down the road. It certainly doesn't have to, but in reality that's what happens 90% of the time.
There are a few successful startups that use TDD (Taskrabbit and Modcloth spring immediately to mind). However, the real question you should be asking is, "Which startups died because they got buried under the weight of their own codebase?" That's a long and very depressing list. In many of those situations, some TDD might have helped.
In short: TDD won't make your startup healthy, but it helps ward off some of the more common fatal diseases.
I don't understand this. You give a disclaimer saying TDD won't make your startup healthy, but then your second paragraph implies that "Good TDD" is a silver bullet by "forcing you" into building "loosely-coupled code that's easy to refactor and change".
TDD can test a monolithic `public static void main(String[] args)` just as well as the most dainty collection of python scripts. In fact, assuming you wrote the two to behave the same, your tests wouldn't know the difference. Isn't one of the larger problems of TDD that it simply tests quantifiable results rather than profiling the machinations of the code?
Writing truly good code is hard. It takes time, practice and a wealth of knowledge... No "process" (as the GP states) will shortcut these requirements for you.
You can have a polished high quality codebase, but still have a dying startup because you've built something that nobody wants. Conversely, having a desirable product makes up for a vast multitude of sins.
However, technical debt is expensive. Having a clean well-factored codebase makes changes cheaper, which means your company will have lower overhead and be more responsive to market demands.
As far as "forcing you to build loosely coupled code", good TDD should involve a thin layer of integration tests (Selenium/Capybara/Whatever), which drive out unit tests for the individual components. If you let the tests drive your code design, and follow the "Red -> Green -> Refactor" workflow, it tends to shepherd you into writing small easily testable functions and objects that are loosely coupled.
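To make the Red -> Green -> Refactor loop concrete, here's a deliberately tiny, hypothetical example (TypeScript with Node's built-in test runner; any test framework works the same way). The `slugify` function and its spec are invented for illustration, not taken from any project mentioned here:

    import { test } from "node:test";
    import assert from "node:assert/strict";

    // Step 1 (Red): write the failing test first - slugify doesn't exist yet.
    test("slugify turns a title into a URL-safe slug", () => {
      assert.equal(slugify("Hello, World!"), "hello-world");
    });

    // Step 2 (Green): the simplest implementation that makes the test pass.
    function slugify(title: string): string {
      return title
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into dashes
        .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
    }

    // Step 3 (Refactor): with the test as a safety net, restructure freely,
    // e.g. extract slugify into its own module so callers stay loosely coupled.

The toy function isn't the point; the point is that the test exists before the implementation, which is what nudges the design toward small, loosely coupled pieces.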
You can also use TDD to salvage crappy code, and derive good design even after the damage has been done. For a beautiful demonstration of this process at work, I strongly recommend Katrina Owen's video on "Therapeutic Refactoring". http://www.youtube.com/watch?v=J4dlF0kcThQ
Of course, there's no substitute for having good technical instincts. I couldn't agree with you more on that point. TDD isn't a silver bullet. It's just a damn useful tool, and more startups should be using it.
> "Which startups died because they got buried under the weight of their own codebase?" That's a long and very depressing list. In many of those situations, some TDD might have helped.
Could you give some examples? PG doesn't include that in his list: http://www.paulgraham.com/startupmistakes.html, and in my personal experience I've never met a startup that failed because they had an unmanageable codebase.
Quite a few actually. I can name two off the top. You've probably never heard of them (why would you, they failed). Saber, and Lucid. Oh, there are certainly others. What kills them is a code base that's too tangled to manage. Estimates grow by an order of magnitude, costs skyrocket, defects mount, customers flee, investors cut their losses.
> Not doing TDD often leads to tightly-coupled brittle software, which can be very fast to implement but also difficult to change down the road. It certainly doesn't have to, but in reality that's what happens 90% of the time.
This is flatly wrong. It may be a tool for productive programmers to keep their code on the right track, but assuming that loosely-coupled code isn't written without it is a huge overreach. Certainly, tests do help, but TDD is just a tool in the toolbox. I have seen many very talented programmers and many of them do not need TDD to produce stable, well-architected projects.
I can only speak from my experience (20 years) of developing web apps, but I've never seen a cohesive, loosely coupled web app, that was both over 2 years old AND not using some form of TDD.
Phrased that way, I agree completely. An established codebase needs to be verified in some ways. But writing code and running it against tests is not exactly equivalent to TDD.
TDD as I understand it is characterized by writing a failing test, followed by implementation code, followed by test-fixing, etc.
However, I can write plenty of good code that does what it's supposed to and works and is stable. THEN I'll refactor as needed, write tests, and since I anticipated my needs, making those tests good and the code testable will be relatively straightforward. That's not TDD, though. It's a pragmatic approach that doesn't prioritize setting requirements (or solidifying an API) over starting simple and iterating quickly.
Funny, I had a comment ready before Matt's that started "I can only speak from my experience..."
So, yeah...in my experience this has played out exactly so: in practice, coders who ignore TDD are probably also ignoring a lot of other good practices. The two principles may be "orthogonal" in theory, but the correlation has strong anecdotal support for me.
What's really tragic here is that the truth in Mr. Daisey's story will get dragged down by the weight of his lies.
Chinese factory conditions are often horrible, and there's often a blatant disregard for human life and dignity. Mr. Daisey did a pretty good job of conveying these ideas in a way that well-heeled westerners could understand at a gut level.
However, his pursuit of storytelling over journalism is going to destroy all that. People are going to (rightfully) pitch the fact out along with the fiction, because there's no way to distinguish the two.
"What's really tragic here is that the truth in Mr. Daisey's story will get dragged down by the weight of his lies."
[disclaimer: I have never taken journalism classes but I date someone majoring in journalism in college :-)]
Disclaimer aside, I believe your comment is the whole 'moral' of the story. Were someone to write a Pulitzer Prize-winning fictional story about a journalist who was so passionate about the topic they were reporting on that they stepped into the cesspool of making up 'facts', and by doing so lost their soul, the thing they were most passionate about would get dismissed and ignored. It's like a Greek tragedy, except that instead of the hero dying, it's some noble cause that dies because of the acts of a selfish reporter.
This is why people who want to be known as journalists must never, ever cross that line. Sadly, it has a similar mechanism to cheating on your spouse: you do it once and don't get caught, and it's thrilling and exciting and nouveau, so you want to do it again, and again, and again. And then you do get caught at some point, and all the good that was your marriage goes "Poof!" in an instant. (Not a personal experience, but related by folks daily, it seems.)
As a literary tool it is very powerful: you can relate to the protagonist's passion, but you cannot forgive their transgression.
What "truth" in his story? How do you know? The most dangerous thing you can do with a story like this is try to pick apart the things that seem true; your brain is wired to make the wrong things seem authentic.
I know because I've seen it. My wife and I used to live in China, and we did some traveling in both the high and the low places.
I can't speak to the particulars in Daisey's story (since I don't know which ones were made up), but the overall picture he painted sounded familiar. It's unquestionable that there are horrible working conditions in Chinese factories.
That's the tragedy here. There's millions of awful wonderful horrible stories you can tell about Chinese factories, and Mike Daisey is obscuring them by making up his own.
And how would you compare the work in the factories to being a Chinese farm worker, for example? Why are millions choosing factories over farms?
I think it is more important that there is a trend towards improved work environments and opportunities than it is to bemoan the fact that any particular work environment isn't as good as some cushy 1st-world white-collar job.
Oh, no. Again with the "choosing". Since I read similar arguments quite often here on HN, I want to give a general, rather than particular answer. The short version is, people under duress can't really "choose" anything, even if it may seem to some free-market capitalists as if they do.
Let me paint a fictional picture for you. Say there's this island that's been ravaged by natural disasters and famine, and its entire population is starving. Rich people from a nearby country realize the islanders are in real distress and will be willing to do a lot for anyone who will save them from certain, horrible death, so they decide to use them as workers. So the rich people set up shop on the island, and the locals start working for them. Some of the rich people pay the workers (each of them producing, say $100's of product a day) with one loaf of bread, oh, and they also rape the women. Some pay their workers $2 a day, and they don't rape the women. Naturally the locals love the latter group, while that group of rich people from the mainland describe how they've saved the poor islanders from certain death and have even treated them rather humanely - well, at least compared to some others. They praise the wisdom of the market who let them gain from the islanders' cheap workforce, while letting the islanders survive.
Now, this (fictional) story is not about China. It's about anywhere an impoverished population, or anyone under extreme duress, lives. Speaking about choice under such conditions is preposterous. People will choose a quick death over slow mutilation. People will choose enslavement over annihilation. Believe me, there's no "free will" at work here, and whether or not the rich people from the mainland have done any good to the island, they certainly have nothing to brag about because they're exploiters who've saved a population merely to enslave it.
My story is extreme and I am not making any direct comparisons. I am saying that poor people are not "free" and they most certainly can't "choose", and if they do have a sliver of choice in their miserable lives (akin to the choice an innocent man might have for his last meal before execution), there is certainly nothing for anyone here to be proud about - certainly not about providing them with "alternatives" (like commuting the innocent's sentence to a life of forced labor).
As another anecdote for the lack of social mobility that the vast majority of people experience, how many people born into wealth has anyone witnessed transition into the working class? What's the ratio of that population to people born into wealth who stay wealthy? Just as it is extremely difficult to be born poor and end up rich, so it is the other way round. People often forget the ever looming hidden factors in these situations; morale and mind-set. From all of the research I've done on what makes people successful, it's all in the mind. From all the research I've done on what makes people change, it's all in the environment. In other words, mindset is intimately linked with the conditions one finds himself in which in turn dictate actions. A nuance of this is that mind-set also modifies environment which, in turn, affects mind-set again. It's a self-modifying system; the feedback loop that is your consciousness.
This is all hardly surprising - just as individual physical particles have inertia, so do we.
"And how would you compare the work in the factories to being a Chinese farm worker, for example? Why are millions choosing factories over farms?"
I hear this argument a lot, and I consider it BS. What their original work was has no bearing on whether the conditions in the factories are good or bad. It just tells us that they are better than the alternative.
Working as a human slave in 16-hour shifts for money barely enough to make it day to day is still preferable to dying of hunger in a Chinese village because of a lack of resources. That doesn't mean that it is good in itself, or that it should not change.
Doubly so if the Chinese farms and village economies are broken down by pollution, regulations, and measures taken to ensure the farm population is driven to work in factories.
You know, like the Stalin-era forced industrialization of the USSR (which is not unknown in the West either: that's how most workers were forced to leave previously livable farm work and seek work in industry, from the 18th-century Manchester (UK) case to the colonies. In most cases it was not a contemplated choice of what was most beneficial to the individual, but a forced decision).
I'm somewhat interested in a curious technical aspect that Daisey put into the story.
He claimed something to the effect of "nothing there was made by machine, every tiny little chip is put there by hand" - which I found hard to believe then and even harder now.
Take a bare PCB, screen print it with solder paste (sometimes glue), run it through a pick and place machine (or a series of machines) then put it through a reflow oven. Once it comes out the oven you give it an auto test, and send it to have the rest of the components put on. (Some things will be damaged by heat, or can't be placed by machine. Examples include connectors on the "wrong side", but I guess massive machines have solved this problem.)
Note that in a huge factory you'd need few people for all the steps until the product finishes testing - most of it is automatic.
Even in tiny factory one person can keep a line running producing a lot of product by pushing a few buttons now and then, and moving boards from one machine to the next.
Having said that, a lot of stuff is done by hand; tiny coil winding and placement is one example.
EDIT:
Here's a UK documentary where young people went to work in a factory in the Philippines. The tech episode is ep. 5. I don't know if it's available online anywhere.
I don't know how you'd place some of the tiny surface-mount chips by hand, especially if the neighboring chips weren't already soldered down. Check out the new iPad teardown's photo of the logic board.
Small SMDs are tricky to solder by hand, especially when the neighboring components are so close, but it is not impossible.
Now, mounting a BGA without a largely automatic machine would be stupid for high volumes (and even for very small ones, and maybe even for one-off prototypes), and in that case it would also be stupid not to pick and place all the SMDs.
Indeed, even with very low wages, it would probably be stupid to mount a board manually except for very low volumes and very small boards. Mounting a small SMD can be done quickly with practice, but you are amazed when you count how many there are (try it on an iPhone photo or on a random motherboard).
My impression is that you should not judge a dire or unfair condition based on the existence of an even more dire or unfair condition.
If there is a famine in an area, and people are dying, is it OK to offer them food and water in exchange for slave labour? No pay, just food and water, and everybody that doesn't like it can go and die of hunger. How does this deal sound?
A forced choice is not much of a choice.
Also check how the once-subsistence farm economy is affected by industrialization, pollution and the state push for ever more production.
Hey batista. Good to see you here. Sorry I didn't read your comment before writing a very similar one somewhere else on this thread. Keep fighting the good fight!
"If there is a famine in an area, and people are dying, is it OK to offer them food and water in exchange for slave labour? No pay, just food and water, and everybody that doesn't like it can go and die of hunger. How does this deal sound?"
My impression is that like most Westerners living today, you have no earthly idea what the term "slave labor" actually means, so we'll leave it at that. Peace.
> My impression is that like most Westerners living today, you have no earthly idea what the term "slave labor" actually means, so we'll leave it at that. Peace.
And my impression is that I very much have an accurate idea, plus I hardly consider myself a "westerner", and not only because I'm some 15,000 miles away from the nearest Walmart. It's not like we grew up with Fox News (or NPR at best) -- we have a penchant for history in these here parts, plus we have been living it live, frequently (including now).
OTOH, you might be unaware that besides the basic determinant (i.e. no choice in the matter), slave labour has had a great variety of working conditions in the past. Some were even better than Foxconn by a lot (say, a black lady cooking and/or taking care of the kids on a huge Southern estate would often be treated quite like family -- or in Ancient Greece slaves were even able to amass their own fortune and run businesses, while remaining legally slaves).
When did he pick apart things in this story that seem true? This is hardly the first piece of evidence suggesting that factory conditions in China aren't great- there is a wealth of evidence. If you want something to watch, check out China Blue, for example.
The point is that this will do damage to the existing, unchallenged evidence, because it'll have some kind of guilt by association attached to it.
"This is hardly the first piece of evidence suggesting that factory conditions in China aren't great"
It's funny how people want to whitewash how Daisey slandered Apple here. He wasn't speaking generically about Chinese factories -- the entire piece is built around a crucifixion of Apple. Apple, the bad Chinese sweatshop exploiter, that's what tens of millions of people who read the wave of publicity from this story took away from it. Each piece I've read has a few bits tacked on about "the rest of the industry" almost as an afterthought.
Maybe Apple's the worst and deserves to be raked over the coals. But if the truth is that Apple holds its suppliers to higher standards and should be held up as an example for other companies, well, you wouldn't know it.
"In our original broadcast, we fact checked all the things that Daisey said about Apple's operations in China," says Glass, "and those parts of his story were true, except for the underage workers, who are rare. We reported that discrepancy in the original show (...)"
Apparently the lies are more about what he claimed to have personally experienced.
"Chinese factory conditions are often horrible, and there's often a blatant disregard for human life and dignity. "
Yes, and Chinese businesses also put melamine in baby formula sold to Chinese families, which made hundreds or thousands of babies sick, and probably killed some.
Frankly, if the Chinese factories are disregarding human life and dignity, it's probably not something pushed on them by Western companies.
If anything, this kind of Chinese business ethics is something that will impede Western companies trying to clean up their supply chains.
This is fantastic news. WebOS has always been full of great ideas, but held back by shaky underpinnings. It's like the inverse of most open-source projects. Great UI, not-so-hot implementation.
There's a lot of folks (myself included) that would have been happy to jump in and assist, but couldn't due to the closed-source nature of the platform.
The real prize, however, is Enyo. It's a great Javascript framework, and works in virtually any webkit-based browser. The biggest problem was the licensing terms: You could only use it on WebOS phones. Now that barrier's being removed.
I'm really looking forward to this. Can't wait to dive into the code. :)
Seconding the excitement about Enyo. This could/should be absolutely huge. Speaks to the level of disorganisation in HP that the blog they link to has no mention of the open sourcing, though. Woops.
There's a blog post on it now. I don't know how much disorganization that speaks to: Maybe the PR person jumped the gun by a few minutes, maybe the blogger was a few minutes late, or maybe they posted simultaneously and then discovered that their blogging platform has a longer cache than the PR wire. :-)
I will check out Enyo, thanks (and megaduck too) for the tip!
They do mention it in their press release: "HP also will contribute ENYO, the application framework for webOS, to the community in the near future along with a plan for the remaining components of the user space."
- Because WebOS has more real devices out there, and for cheap.
- Development for the platform is ~solved~ and nice (including nice emulator images)
- I prefer the card view / just type experience to what I know from MeeGo [*]
[*] I was a huge Maemo fanboy in the past, got a couple of their shirts and went to their summits. I still own an N810, but it's mostly a clunky music player now. For me, Maemo & MeeGo are dead by now; WebOS is in a state between worlds and might survive (after this announcement) with some support.
I guess that I'm kind of worried this won't be a good thing (at least for Web OS). Without any concrete plans for upcoming Web OS devices, it is unclear whether HP will actually continue to develop the OS. There aren't all that many devices that can run the current version of Web OS (just the Touchpad and Pre3 AFAIK, and the Pre3 was barely released). It doesn't appear that these are currently being produced or sold. I guess it is possible that Web OS will be ported to Android devices, but the lack of an official development platform is definitely a roadblock.
I'm not sure whether Web OS can survive without both current devices and active development by HP. I'm fairly certain that Open Source developers alone won't be able to advance the platform to the next level, although I'm sure that there is a lot that they can contribute.
With that said, I'm sure there is a lot of interesting technology that can be mined out of Web OS and integrated into other devices.
If they had done this before Nokia bought in to Windows Phone 7 then we might actually have seen a great user interface on Nokia's excellent hardware.
Maybe RIM could adopt it instead? They could really use a better UI, plus a platform that actually scales well to tablets. Plus they have a large enough install base that once they get rid of their disastrous programming environment developers might actually start writing good apps for their phones.
I agree that Nokia would have been a nice fit for WebOS, but people who don't think Windows Phone has a great interface for a phone are squarely in the minority.
But yes, it's great that two things came out of Nokia+Microsoft: Microsoft finally has a really solid mobile hardware partner, and Nokia finally has a well-designed operating system.*
*That they care about. Shed a tear for MeeGo Harmattan.
Nokia switching to an open-sourced WebOS would never have happened. They already had a beautiful, slick and usable operating system, and at least some kind of a strategy of bootstrapping it into a viable ecosystem (using QT on Symbian as the incentive to develop for QT on Meego).
It's still hard to say whether the decision to basically can Meego and bet everything on WP7 was the right one. But at least there was some kind of a case to be made for WP7 having a larger potential ecosystem, and for Microsoft being willing to give Nokia huge bags of money. For WebOS it would have been a case of downgrading in every important sense. No real ecosystem, no in-house experience, no other company backing it, no real technical advantages over Meego.
I've been an iPhone user since 2007 and have had every model with the exception of the 4S. I fully intended to move to Android, specifically the Galaxy Nexus, but after playing with the Mango UI I bought the Samsung Focus S and I'm glad I did. It is by far the best OS I've used on a phone yet! It makes the iOS UI simply look dated (which, in all honesty, hasn't changed much since 2007).
Unfortunately, the app ecosystem has not caught up to the system UX yet. A lot of the apps are slower and have fewer features than their iOS/Android counterparts. That seems to be the case with webOS as well. Hopefully, these platforms catch on a little bit to provide a refreshing counterbalance to the app grid UX of iOS/Android.
There are definitely far fewer apps in their app marketplace. However, the core apps that I use quite frequently are there and are just as good as their iOS counterparts (if not better... I prefer both Spotify and Twitter on Mango). There are a few exceptions here for me, like 1Password, which is not nearly as good as the iOS version, but to be fair they offer it for free on Mango compared to the $15.00 price tag on iOS.
The one thing I'm really excited about, though, is that the developer tools are second to none. Microsoft has made it very simple to build very nice looking applications very quickly. As a developer who just moved to this platform, I see the limited apps as a great opportunity to help improve the platform even further, and I plan on doing just that!
So you're saying you're a "developer" who was using iOS, planned on going to the Nexus, and then you just decided to go to Windows Phone after playing with a Focus S, and now, after getting your new phone, suddenly you're going to start developing for Windows Phone. Sure.
I started my career as a developer working on the Microsoft platform. In fact, I'm probably still more comfortable in C# than in any other language. In my current job I work in a mixed environment of C#, Java, and Ruby. I'm not sure why the sarcastic comment without knowing my history as a developer?
RIM has already settled on QNX for its next-generation OS (their tablet runs on it), and I think they just don't have the software/UI chops to handle it either way.
I can see WebOS being a competitor/counterbalance to Android and WP7 (mostly Android) for phone makers other than Samsung (who already have bada).
QNX is the underlying RTOS. Application UI is done in Adobe AIR, which is a dead platform. JS/HTML5 on top of QNX would capitalize on their strengths (more responsiveness over Android) and patch over their weaknesses.
When the Playbook was launched, you could develop both AIR and HTML5 apps. All their other SDKs have been in beta ever since.
HTML5 apps are apparently wrapped in AIR internally, and maybe that's what made my app more laggy than in iPad Safari. But HTML5 app dev is nothing new for RIM.
> JS/HTML5 on top of QNX would capitalize on their strengths (more responsiveness over Android) and patch over their weaknesses.
We're talking about RIM here; how would web technologies (a game they are still late at) "capitalize on their strengths"? Let alone "patch over their weaknesses"?
I'm guessing that nailer is referring to the fact that QNX is a realtime OS, which could provide decreased UI latency compared to Android. QNX is a "strength" of RIM, so using it to their advantage would "capitalize on their strengths."
As an aside, I always thought it was weird that QNX was owned by Harman for a while...
Continuing to use QNX's RTOS would capitalize on their strengths.
Having an application development platform that has actual developers and a growing community would patch over the weakness of Adobe AIR, which has neither.
The PlayBook ships with a recent version of WebKit, and all BlackBerry 7.X phones contain a WebKit-based browser. Starting with the Torch last summer, most BB phones shipped with WebKit. The days of the Java-based browser on BlackBerrys are over.
Please correct me if I am wrong, but that is not the main benefit of WebOS. I believe the main benefit of WebOS is that it supports HTML5/JS applications natively in the OS, and they are not bound by a browser frame.
Nobody wants to have to open a browser and load a bookmark to get to an application that has browser widgets taking up space at the top, bottom, and side of the screen. They want a home screen icon they can tap once and get an application with native look and feel. If developers can write that app in HTML5/JS then it's a win/win for both developers and users. Developers get a highly portable framework that allows them to rapidly develop software and users get a native app experience.
QNX is a microkernel, so theoretically they could support multiple application programming environments at the same time on the same phone, and they should. I.e., they could now churn out a phone that can support WebOS, RIM, and Android apps simultaneously.
While I agree, I really don't see hardware manufacturers adopting webOS. Since webOS is now open source, it would be much easier for them to adopt the innovations of webOS (particularly interface-wise) and use it as a shell on top of Android. That way you get the Android marketplace and the advantages of webOS.
I guess HP could enforce the patents to keep those innovations locked to webOS but it's kind of hard to do that AND promote webOS as an open solution for people to use.
Full disclosure up front: I work for IndieGoGo. I can't speak for other crowdfunding platforms, but I assume that they're similar on this topic.
A lot of people bring up the possibility of fraud with these crowdfunding sites (IndieGoGo, Kickstarter, Rockethub, etc.). While fraud is definitely a concern with crowdfunding, it turns out to be a much smaller problem than you would think.
First of all, the very nature of crowdfunding seems to act as a natural brake on fraudulent behavior. While it's easy to fool one person, most campaigns have dozens or hundreds of contributors. It's trivially easy for any of those contributors to sound the alarm (to the crowdfunding platform and other contributors) if there's anything fishy.
To reinforce this point, it's also been shown that "outside contributions" (contributions from people not personally connected to the campaign or campaign owner) generally don't happen until a campaign's received around 30% of their target amount. Social proof is really important, and it's hard to get if you're a scammer.
Secondly, we (IndieGoGo) take fraud very seriously and invest a lot of resources into preventing it. We've got several layers of automated fraud detection, as well as a couple sets of human eyes on every campaign. While we can't guarantee fraud won't happen, we make every effort to make it vanishingly small.
In a nutshell, we're working pretty damn hard to make crowdfunding a safe and trustworthy marketplace. So far, the results seem pretty positive.
As for the specifics of this legislation: No comment, other than "We're looking at it."