Hacker News | past | comments | ask | show | jobs | submit | tvarghese7's comments

I thought of N-Cube machines when I saw it, CM didn't even occur to me.


Worked on the CM-1 and CM-2. I felt they were awfully buggy. At one point they asked if they could use my code to run as a diagnostic, since it would break the log() function on occasion.

The Cray fluorinert fountains were way cooler :)


Around the same time (1984), there was also another very cool piece of technology that often gets overlooked: the CMU WARP. It wasn’t as flashy as the Crays and the Connection Machine, but it was the first systolic array accelerator (what we’d now call a TPU). It packed as many MFLOPS as a Cray-1.

It's also the computer that powered the Chevrolet Navlab self-driving car in 1986.


I'd be interested to hear what you thought of the programming architecture.

Excluding the bug side of things: if they did everything they were supposed to, how hard was it to get them to perform a task that distributed the work through the machine?

I read some stuff on, I forget, maybe *lisp? I found it rather impenetrable.

On top of this, have there been any advances in software development in the subsequent years that would have been a good fit for the architecture?

I always thought it was an underexplored idea, having to compete with architectures that were supported by a software environment that had had much longer to develop.


I used them at the (US) Naval Research Laboratory, programming in a dialect of C called C*. This automatically distributed arrays among the many processors, similar to how modern Fortran can work with coarrays.

If the problem was very data-parallel, one could get nearly perfect linear speedups.
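For readers who never used C*: the flavor of its data parallelism survives in today's array languages. A rough sketch in NumPy (my analogy, not actual C* syntax), where one whole-array statement stands in for the same instruction running on every CM processor at once:

```python
import numpy as np

# One "virtual processor" per array element: a whole-array expression
# applies the same operation to every element, much as a C* statement
# over a shape ran on every CM processor in lockstep.
u = np.linspace(0.0, 1.0, 8)
v = np.log1p(u)            # elementwise: each element computed "in parallel"

# Purely elementwise updates touch no neighbors, so nothing needs to be
# communicated between processors.
w = u * u + 2.0 * v
```

When the whole update is elementwise like this, no processor needs another's data, which is exactly the very data-parallel case where the speedup was close to linear.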


This is so cool to read, thank you for sharing!


Reminds me of DOGE :)


Yup, getting the problem to fit the machine was hard. I thought NCUBE was better than the CM1 for doing actual work.

Some people resorted to just running each node separately; after the initial conditions were set up there was no need for communications. That was one way around having to be clever and adapt your algorithm. They would run for days, sometimes because nobody else was on the machines :)

At one point I got my 2D code running on a CM2 (IIRC), one of the nodes was bad and the Fortran log function always returned 0. So when I made the movie of my simulation there was one tile/rectangle that was just blank. Afterwards they asked me if they could use my program as a diagnostic tool to verify all the nodes were working correctly. :))

I moved on from parallel and supercomputers when I graduated. Don't miss them that much.


It is interesting you mention the log function being what tripped up the broken node. On the CM1 and CM2, log was implemented via Feynman's algorithm for logarithms, which finds factors via a shift and subtract and then adds their precomputed logarithms via a lookup in a table shared by all the processors. If one of the nodes had a bad route to that lookup table (however that was actually done), it would result in just that node failing to return a correct result for a log instruction.
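A minimal sketch of that shift-and-subtract scheme, reconstructed from the textbook description of Feynman's algorithm rather than from TMC's actual microcode: greedily factor the mantissa into terms of the form (1 + 2^-k), where each trial multiply is just an add plus a shift, and sum the matching precomputed logarithms from the shared table.

```python
import math

# Shared table of ln(1 + 2^-k), precomputed once for all processors.
TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 45)]

def feynman_log(x: float) -> float:
    """Approximate ln(x) by factoring the mantissa into (1 + 2^-k) terms."""
    assert x > 0.0
    m, e = math.frexp(x)           # x = m * 2^e with m in [0.5, 1)
    m, e = 2.0 * m, e - 1          # normalize so m is in [1, 2)
    y, result = 1.0, 0.0
    for k in range(1, 45):
        t = y + y * 2.0 ** -k      # y * (1 + 2^-k): in fixed point this is
        if t <= m:                 # an add plus a right shift by k
            y, result = t, result + TABLE[k - 1]
    return result + e * math.log(2.0)
```

If a node's fetches from that shared table silently came back as zero, `result` would never accumulate anything and the mantissa's log contribution would be 0 every time, consistent with the always-zero log described upthread.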


Would you care to elaborate on why?

I had to work on one and didn't really see what the fuss was about. Performance still sucked compared to a Cray-2. I mean, it was supposed to be a supercomputer...

I/O was painfully slow, so unless you were not planning on looking at your results, or your results were just a few numbers, it was a real bottleneck. We were doing 2D and 3D simulations and periodically outputting large amounts of data to make movies from the simulations. It would run for a few minutes and then stop for several times that long to get the data out.


It couldn’t keep pace with a Cray for any task that wasn’t optimized for its architecture. If your problem fit the SIMD model, like the genetic algorithms I was implementing at the time, it was just amazing.


Yeah, ours was a fluid dynamics problem that needed information from neighboring cells. So at some point there needed to be data moving between cells, and that really killed performance. There was also a global value that needed to be computed at each iteration, which controlled the simulation speed to ensure the step sizes were not too big.

Also the file I/O was horribly slow. We were scaling and dumping raw data to files so that we could do post-processing to make movies of the simulations.
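The two costs described above, neighbor exchange each step plus one global reduction for the step size, can be sketched like this (a toy 1D advection-style update in NumPy; the names and the update rule are illustrative, not the original code):

```python
import numpy as np

def step(u: np.ndarray, dx: float) -> tuple[np.ndarray, float]:
    # Global reduction: every node must agree on one dt (a CFL-style limit),
    # so the whole machine synchronizes here.
    dt = 0.5 * dx / max(np.max(np.abs(u)), 1e-12)
    # Neighbor exchange: np.roll stands in for the inter-node communication
    # needed to read adjacent cells' values.
    flux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return u - dt * u * flux, dt

# Toy initial condition on a periodic 1D grid of 64 cells.
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
for _ in range(10):
    u, dt = step(u, 2.0 * np.pi / 64)
```

On the CM, the `np.roll` line would be inter-processor routing and the `np.max` a machine-wide reduction; both serialize what is otherwise perfectly parallel elementwise work, which is presumably where the performance went.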


Plus, beyond the fascinating architecture, it was a beautiful machine. The Cray-1 and its successors may occupy more historical memory (see: Sneakers (1992)) but the CM-1, CM-2, and to a lesser degree (in my ever so humble opinion), the CM-5 blow Cray's most iconic machines out of the water on the sexiness axis.


Physical appearance or architecturally?


Sadly I never even saw the CM-2 I used. Just got in via telnet.


If you're in the American Southeast, the Computer Museum of America in Roswell, GA has one on display. I've visited and seen it, along with other supercomputers.


I thought it (CM1) was a turd when I worked on it as a grad student. Looked cool though.

I would go to conferences where people would proudly present their algorithm getting 1MFLOP. Parallel programming was hard then and it is hard now. SIMD made it even harder.

CM2 was a little better since it was easier to get better performance, but we had some of the first machines and stability was not good. I think multiple sites were told they would have the first machine, and TM shipped to them all in pieces simultaneously, so technically they all had the "first" machine. Smart guys.

Still CM-2 was better than the ETA-10 we had. Cray-2 was great.


Yeah she had one of these too.


It was not all that easy to get to, and not all that easy to mount a switch securely nearby. Maybe I just lacked imagination.

The real reason and the part I skipped was the leakage current was getting worse over time. By the time I replaced the MICU it was close to 2A draw. It was just a matter of time before it died.


I had a 2005 Acura RL which had some pretty fancy electronics for its time. I gave it to my daughter a few years ago and she started reporting that the battery was dying. It would take several days to die, so at first she would just jump-start the car every couple of days. When she brought it home (she lives 5 hours away) I measured the current draw at >800mA when it should be <20mA.

These cars have a Bluetooth hands-free module that goes bad and starts leaking power. The fix is to bake it in the oven at 350F for 5min to reflow the solder. I had already done this twice and then decided to just get rid of it, so that was not the problem.

She needed to get back, so we figured out which fuse was the easiest to remove and she lived with that for a couple of months. In the meantime we figured out it was the MICU module, which is the module that controls all the electronic systems and security. Without it nothing works. I was told a new one would be north of $1200 plus labor, if they could find one. The dealership said they would chase down the problem again for $165/hr just to be sure... and I would have to leave the car with them until they were ready. Possibly 3 weeks.

I found a couple of the units on eBay for <$100. One was from a 2008, and another, with some broken plastic on the case, was from an early 2005 (same as my car). I tried the first one; nothing worked, all kinds of systems failed. I learned there is an EEPROM chip in the Honda/Acura MICUs that stores the codes to make the systems work and is unique to each vehicle. The dealership can program them for a fee. Having a full electronics lab, I swapped the chips in no time. It was better, but the starter would not engage. In desperation, I did it again on the 2005 MICU and... success!

Mind you, it took >4 hours the first time to get the MICU out and 2 hours to put it back in. I did this at least five times, and got good enough at it that I could do the whole swap in less than an hour. My back and hands were not happy for a day or two each time.

