The difference is that the LLM is predictable and repeatable. Whereas a junior dev could go AWOL, leave unexpectedly for a new job, or be generally difficult to work with, LLMs fit my schedule, show constant progress, and are generally less anxiety-inducing than pouring hundreds of thousands into a worker who may not pan out. This sentiment may show my lack of expertise in team building, but at worst it shows that LLMs represent a legitimate alternative to building a large team to achieve a vision.
I’d use it to pick through and sort the massive pile of Lego my children leave behind after an Easter long weekend. Seems trivial to do identification based on the high-quality corpus of block databases. Wouldn’t have to be super quick - you could just leave it running overnight.
I’m sure someone’s written an interesting paper on the ideal sorting algorithm too (e.g. ‘large things before small things’ vs. ‘just pick up and place the nearest thing’). I would personally just get it to sort them into basic sets before placing the trays back in their goddamn drawers.
Depends on your definition of hobby, and on whether a hobby has to produce something tangible in the short term to be enjoyable. Following SOTA research, tinkering, and trying new things in a field unrelated to your day job can be a fulfilling hobby.
My impression is that the fastest/easiest way to do it would be putting all the pieces inside a hopper with the selector at the bottom, perhaps using compressed air to push the pieces into different containers as they fall. I believe there’s already something like that in produce factories, separating vegetables by state/size.
I interviewed once with a company that made rice and grain sorters. They have a massive hopper at the top and pass the grain in a curtain through a machine vision camera, and then decide in real time where each grain goes based on the image.
Apparently pretty much every grain of rice you've eaten has been through a machine like that.
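The control loop behind it is conceptually simple, even if the engineering isn't; roughly this kind of thing, where grab_frame, detect_grains, looks_defective and fire_air_jet are all hypothetical stand-ins for the real camera and valve drivers, not any actual API:

    # Conceptual sort-in-flight loop: image the falling "curtain", decide per
    # grain, and fire an air jet to divert rejects before they land. Every
    # function passed in is a hypothetical stand-in for real hardware drivers.
    def sort_loop(grab_frame, detect_grains, looks_defective, fire_air_jet):
        while True:
            frame = grab_frame()                    # one image of the curtain
            for lane, grain_image in detect_grains(frame):
                if looks_defective(grain_image):
                    fire_air_jet(lane)              # kick this grain into the reject stream

The real machines run that loop at rates a hobby setup couldn't touch, but the structure is the same.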
I don't know why this can't just be a cleverly laid-out arrangement of layered sieves, each one catching a different kind of piece... I don't see how it needs to be mechanical in any sense other than "pour in the lego junk, out it comes neatly sorted", a la coin-sorting machines.
I could see people wanting to add on color detection for the bricks, but even that could still be solved by camera + servos/steppers and a chute that goes to different bins. No need for an arm.
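Even the colour step is basically a lookup from pixels to a chute angle. A rough sketch with OpenCV, where the HSV ranges and chute angles are made-up placeholders rather than tuned values:

    # Hypothetical colour-to-bin picker: threshold the frame in HSV and route
    # to whichever bin's colour dominates. Ranges and angles are placeholders.
    import cv2
    import numpy as np

    BINS = {  # name: (hsv_low, hsv_high, chute_angle_degrees)
        "red":    ((0, 120, 70),   (10, 255, 255),  0),
        "yellow": ((20, 120, 70),  (35, 255, 255),  45),
        "blue":   ((100, 120, 70), (130, 255, 255), 90),
    }

    def pick_bin(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        best, best_pixels = None, 0
        for name, (lo, hi, angle) in BINS.items():
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            pixels = cv2.countNonZero(mask)
            if pixels > best_pixels:
                best, best_pixels = (name, angle), pixels
        return best  # None means "send it to the reject bin"

    ok, frame = cv2.VideoCapture(0).read()
    if ok:
        print(pick_bin(frame))  # e.g. ('red', 0); a real rig would drive the chute servo here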
It is probably possible to do it, but the sieves would have to be quite big just to account for the very large number of distinct pieces, and you’d also have to “orientate” them correctly using only gravity.
For a start you're going to need a camera. Maybe more than one. You want depth sensing? Even a cheap choice like a RealSense is going to add another $250 to your costs. And you'll need a sturdy mount for it; the robot's going to vibrate the table and you don't want to suffer motion blur.
Got the camera in a fixed location, over the area you're picking from? Then the robot's going to block the camera's view when it reaches in. No real-time hand-eye coordination for you. Putting the camera on the robot's wrist? Now you've got motion blur problems - and reliability problems, because normal USB cables aren't designed for continuous flexing. You've also got a gripper in view all the time - and now that the camera moves, things are constantly going out of focus.
The reach of the arm isn't long enough to give you many bins to drop items off into, considering the number of lego parts there are. The longer you make the arm, the greater the torque at the shoulder joint. Making the motors bigger? Now the elbow motor is heavier. Gearing them down? Now you've got gear backlash.
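Back-of-envelope on the reach problem, assuming a rigid arm held horizontally (the worst case) with the payload at the tip and the arm's own mass acting at the midpoint; all the numbers are made up:

    # Rough shoulder-torque estimate: tau = g * (m_payload * L + m_arm * L / 2).
    # Ignores motor mass distribution, dynamics, and gearbox losses.
    g = 9.81  # m/s^2

    def shoulder_torque(reach_m, arm_mass_kg, payload_kg):
        return g * (payload_kg * reach_m + arm_mass_kg * reach_m / 2)

    for reach in (0.3, 0.5, 0.8):  # metres, made-up values
        print(reach, round(shoulder_torque(reach, arm_mass_kg=0.6, payload_kg=0.05), 2), "N*m")
    # Doubling the reach roughly doubles the holding torque - before you even
    # add the bigger motors the longer arm now has to carry.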
Your Dynamixels will break, for some reason. Maybe eventually you'll figure out why. In the meantime, $50 each please.
Parts like the small satellite dish https://www.bricklink.com/v2/catalog/catalogitem.page?P=4740... will prove very hard to grasp. And there's like 50 different colours, you're going to need to know your way around lighting and camera settings if you want to reliably tell transparent light blue, transparent medium blue and transparent dark blue apart.
And that's before you get into questions like how to tell a 2x4 stud brick apart from two 1x4 stud bricks next to each other - or how to grasp a brick when an adjacent brick is blocking you from getting in with the gripper.
Every single one of these issues is solvable - but by the time you've solved them all? You could have hand-sorted that lego 20 times over :)
I happen to use a few of their cameras, and they generally work as advertised (satisfied Kickstarter backer for the OAK D and OAK D Lite, probably going to buy the OAK D Pro at some point). But, while I did indeed pay less than $250 for them individually, their current active depth offerings are $350 (and while my OAK D is fine for my lit, varied environment, I do often wish it were a little more accurate). I thought the Lite was also around $200, but it's actually $150 as you said. It's a pretty good little platform for the price. Be sure to check out the experimental repo too: https://github.com/luxonis/depthai-experiments/tree/master/
They really spent a lot of time diving into the complexities of your question and I found it really interesting. Your handwavey, one sentence response without even an example (if there even is one??) is kind of rude in this context.
We have a “Lego sheet” - just a regular bed sheet that goes on the floor before playing with Legos. When it’s time to collect, you make a “bag” with the sheet, grabbing (most of) the pieces, and then we “pour” them into the final container. The sheet goes on top of the pieces in the same container, so it’s also the first thing that comes out the next time.
From the item description:
"TIDY UP IN SECONDS: Say goodbye to messy playrooms with our storage organizer! The play mat provides a dedicated area for creative play, and when it's time to pack up, simply gather the handles and tip everything back into the compact storage cube. "
The devil is in the details. In theory everything is trivial with enough SW dev hubris ;)
Even just the path planning towards the block to ensure a good grip and pick-up is not a simple task. Consider all the block shapes, possible orientations, collisions...
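Even a toy version shows the problem. A sketch of a naive top-down grasp-feasibility check on a 2D occupancy grid - the grid, the clearance rule, and ignoring height/orientation entirely are all simplifications invented for illustration:

    # Toy grasp-feasibility check on a 2D occupancy grid (1 = brick present).
    # Real planning also has to reason about piece height, orientation and the
    # arm's kinematics; this only asks "is there clearance for the gripper jaws?"
    GRID = [
        [0, 1, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 0, 0, 1, 0],
    ]

    def top_down_grasp_clear(grid, row, col, clearance=1):
        for c in range(col - clearance, col + clearance + 1):
            if c == col or not (0 <= c < len(grid[row])):
                continue  # the target itself, or empty space beyond the grid edge
            if grid[row][c] == 1:
                return False  # a neighbouring brick blocks one of the jaws
        return True

    print(top_down_grasp_clear(GRID, 2, 3))  # True: both neighbours are empty
    print(top_down_grasp_clear(GRID, 0, 1))  # False: the brick at (0, 2) is in the way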
An impractical use would be to have a few of them replace monitor arms in a multi monitor setup. You could then rapidly switch between a few configurations.
Not much. I have a UArm on my desk, which is a lot like this one but with cheaper servos. It was too inaccurate to use for much of anything. I built a force-feedback sensor for it out of a 3D mouse. Reasonable idea, but not stiff enough for the application.
From over a decade ago: https://us.pycon.org/2012/schedule/presentation/267/ (with just a pair of motors to point squirt gun at specific angles.) One of the (many) cases where a robot arm would be more general without being in any way better :-)
This actually sounds a bit complicated to me, as you need to adjust the angle for each pour from a bottle. And then there might be different viscosities involved with some bottles...
Any suggestions from the community on the best way to consume this site as an audio book? I know I could scrape and feed into a text-to-speech library but was wondering: is there is anything off-the-shelf?
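Worst case, the DIY route is pretty short. A minimal sketch of what I mean, assuming the pages are static HTML, using requests + BeautifulSoup + pyttsx3 (the URL is just a placeholder):

    # Hypothetical scrape-and-speak pipeline; the URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup
    import pyttsx3

    url = "https://example.com/chapter-1"
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

    engine = pyttsx3.init()
    engine.save_to_file(text, "chapter-1.mp3")  # or engine.say(text) to listen live
    engine.runAndWait()

But something off-the-shelf would obviously be nicer than babysitting a script.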
Interesting take. Would my HOA be controlling utilization and maintenance? If the car has a flat tire or my neighbor monopolizes the usage, would the larger relationship/service be responsible for fixing the issue?
Sure it would, just as any other service provided for by your condo/HOA fees or what have you. BTW I think having an HOA do anything beyond landscaping is probably more of a headache than it's worth, but still...
I think the possibility is more around SDCs becoming not simply a business to individual customer service or subscription, but rather SDCs being offered by organizations the individual customer is already a member of as part of that organizational offering. This can begin at large and wealthy retirement communities first (like Sun City Grand in Phoenix), but there's no reason why it couldn't be implemented elsewhere or why an organization that is already setup to handle customer and vehicle relationships can't offer it as well. Think AARP or your auto insurance company.
Sounds good in theory. I’m not optimistic about all the car makers cooperating on an industry standard, though. Plus the failure scenario seems pretty catastrophic to overall throughput. I hope some bright thinker figures those problems out; it does seem like a great opportunity.
It doesn't seem like you'd need an "industry standard" beyond the actual rules of the road. In the same way that normal human-operated cars with better visibility can change lanes on the highway more safely and efficiently, any improvements in sensor ability, reaction times, etc. in self-driving cars (and even automatic safety features for human-operated cars) should make traffic safer and more efficient.
What do the actual rules of the road say the braking point should be for an 8-person minivan with 5 passengers in dry weather?
What about 8 passengers in the wet, at night, with ice on the roads?
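(Even the idealized textbook formula gives wildly different answers once the surface changes. A rough sketch using d = v^2 / (2 * mu * g) with typical friction coefficients, ignoring load, brake fade and reaction time:)

    # Idealized stopping distance: d = v^2 / (2 * mu * g). Ignores vehicle load,
    # brake condition and reaction time; friction values are textbook-typical.
    g = 9.81
    v = 100 / 3.6  # 100 km/h in m/s

    for surface, mu in [("dry asphalt", 0.7), ("wet asphalt", 0.4), ("ice", 0.1)]:
        print(f"{surface}: {v ** 2 / (2 * mu * g):.0f} m")
    # dry ~56 m, wet ~98 m, ice ~393 m - there is no single "correct" braking point.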
Old argument, I know, but if a human gets that wrong and plows into a pedestrian they wear the consequences; will an auto manufacturer do the same?
My guess is no, and the result is that even if we can get a majority of cars to be self-driving, they're probably not going to be pushing the limits of road capacity any time soon like the GP suggests; more likely they'll continue to drive super conservatively and fail fast, like the videos in the linked article.
Our team has built a low-code solution to build simple web views and help lower the barriers to making changes. So far it has not saved us any time and arguably has made things worse. The requirements coming in are complex, so instead of building the views in a comfortable environment (our IDE) we are building the same amount of complexity in an environment with rough edges (our LC/NC tool). In my opinion we have misidentified the bottleneck. In reality the time sink is getting clarity on the requirements and understanding the interactions of the components on the page. Whether that page is made in an IDE or in our tool is largely irrelevant.
> In reality the time sink is getting clarity on the requirements and understanding the interactions of the components on the page.
This is the core issue with any sort of complex system. Absolutely nothing matters when the customer is non-responsive with requirements or is not providing enough feedback on prior iterations.
Code vs no-code has absolutely no bearing on this part of the equation.
I do think there is value in low-code or scripted/DSL solutions, but it has to be a very intentional strategy that isn't simply leaning on "configurable == done faster".
"Configurable" is a manager-trap, just like low code. If something's configurable, you now need to forever test all the potential configurations, which always seems to become a combinatorial explosion of tests.
I see this almost every day. A low-code/no-code platform configuration change rolls through the testing lifecycle of straight up code. All the advantages of a low-code/no-code platform fly out the window because the time from feature request to prod is the same as if it were developed from code.
A huge part of my job as a developer is organizing the business person’s thoughts for them, and filling in the non-happy-path 80% of the app that they don’t care about, but their users do.
And doing all that in a coherent way that’s ruthless about maintaining simplicity for as long as possible, including pushing back on business ideas that add marginal benefit but mushroom the complexity of the app.
This definitely applies to Google's PaaS offerings. Google App Engine looks like a great solution, except your app is now entirely stuck on a constantly changing platform. The drop-in components they offer are constantly getting deprecated and re-architected with no clear upgrade path. For example, many of their original drop-in components were custom (Memcache, Taskqueues, NDB) and are now deprecated with no interoperability with the now-recommended 3rd party components. If you depended on these components, you're now either existing in a precarious purgatory or you need to rip out and replace all uses of those libraries, which completely reneges on the PaaS value proposition.