On a related note, Chick-fil-A is using deep-learning object-detection models built with MXNet to track how long fries have been waiting: https://www.youtube.com/watch?v=3Uuq_cX8b1M
It would've been much simpler to have a plate with a pressure sensor and a multi-color LED per spot. When you set the fries down, the sensor activates and the LED turns green. After X minutes the LED turns yellow, meaning the fries are becoming stale. Finally, after X+N minutes, once they're no longer fresh, it turns red. Removing the fries turns it off. Aside from being way cheaper, I'd conjecture that this would be much more reliable. I'm pretty sure I could get a prototype up and running over the course of a weekend or two.
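The timer logic is just a tiny state machine. A minimal sketch in Python (thresholds are made up, and the actual sensor/LED wiring is omitted; `led_color` is a hypothetical helper, not from any real product):

```python
import time

# Assumed thresholds -- the "X" and "X+N" from above, picked arbitrarily.
FRESH_SECS = 180   # fries considered fresh for 3 minutes
STALE_SECS = 300   # after 5 minutes they're no longer servable

def led_color(placed_at, now=None):
    """Return the LED color for a slot whose pressure sensor
    activated at `placed_at` (epoch seconds). None = slot empty."""
    if placed_at is None:
        return "off"                       # sensor inactive: no fries
    elapsed = (now if now is not None else time.time()) - placed_at
    if elapsed < FRESH_SECS:
        return "green"                     # fresh
    if elapsed < STALE_SECS:
        return "yellow"                    # becoming stale
    return "red"                           # no longer fresh
```

On real hardware this would run in a loop, polling each sensor and driving its LED; the sketch only captures the freshness-to-color mapping.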
If you wanted to get really fancy, I guess you could also track the room temperature and humidity and use those to better estimate how long a batch of fries will remain fresh. Although I don't know whether environmental factors like these affect fry freshness enough to be worth taking into consideration.
Anyway, it looks like they were just doing this for fun and learning, so I guess it doesn't really matter.
This was a good learning experience for our engineers at our Innovation Center at Georgia Tech in Atlanta. It may bear fruit in the form of a useful solution in the future, though. What is unique to Chick-fil-A is 'volume'. We do a lot of sales in our restaurants, so anything we can do to try and make our team members' lives easier is important to us. We want them to enjoy their jobs, and we want to do the best we can to consistently create high-quality food experiences for our customers. Our teams in restaurants are the heroes, but we are trying to use technology to help them do what they do. <thumbs up>
The computer-vision-based version means you just take the first fry off the rack. Yours means the employee needs to assess the LEDs each time (which may only take a second, but is repeated hundreds if not thousands of times a day), reach further for the yellow one in the back, etc. That's also a lot more sensors to break down than a single camera.
The project will be financed by the Boring Company, not by the city. The Heathrow Express in London works on a similar concept: it takes 15 minutes to get to Heathrow compared to an hour on the Tube, and they charge roughly 10 times the price of a regular tube ticket.
Well, I took it because I was silly enough to buy a flight with a transfer between London City and Heathrow. The first time I took a cab... £238. Kill me. The next time I took the Tube, and was glad to get on the cushy Express after the crowded Underground... and I had to make my flight, so another hour would not have worked for me.
Which is why the Heathrow Connect has eaten its lunch: it runs on the exact same line, costs about £11, stops along the way, and takes about 25 minutes. (It's about to become part of the new Crossrail line, and I gather it has changed name and operator in the past two months.)