
As long as you are using the LoRaWAN protocol (not just LoRa modulation), encryption is not optional. It's built into the link layer of LoRaWAN, and a device cannot join a network without a unique key that's shared out of band.
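For a concrete sense of what "built in" means, here is a rough sketch of the session key derivation that happens after an OTAA join (LoRaWAN 1.0.x, using pycryptodome; the field layout is from memory of the spec, so double-check sizes and endianness against the spec before relying on it):

    # Sketch of LoRaWAN 1.0.x session key derivation after an OTAA join.
    # AppKey is the 16-byte root key shared out of band; the nonces come
    # from the join exchange itself.
    from Crypto.Cipher import AES

    def derive_session_keys(app_key, app_nonce, net_id, dev_nonce):
        # app_nonce: 3 bytes, net_id: 3 bytes, dev_nonce: 2 bytes
        cipher = AES.new(app_key, AES.MODE_ECB)
        def derive(prefix):
            block = bytes([prefix]) + app_nonce + net_id + dev_nonce
            return cipher.encrypt(block + bytes(16 - len(block)))  # zero-pad to one AES block
        return derive(0x01), derive(0x02)  # (NwkSKey, AppSKey)

The network session key covers integrity of the MAC layer and the application session key encrypts the payload, so even the network operator can't read application data without the root key.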

Also, the range is entirely the point of using LoRaWAN for some of us. All other IoT protocols have an abysmally short range, making them impractical for anything other than single-building applications. Maybe most applications don't need a mile of range, but the fact that it can reach across a couple of acres enables a lot of applications.


Interesting concept for a project... From what I have seen in the industry, it seems like this is something every organization ends up developing at least one custom tool for. EDA vendors even generally sell their own solution to this problem, but we always end up back at custom tools. It would be interesting to see if community collaboration could find a better general solution.

That said, it seems like Smelt is far too early in development to be practically used at this point. Some basic table stakes that I didn't see:

- Test weighting. Having a way to specify a relative repeat count per test is essential when using constrained random tests like we do in the ASIC world.

- Some form of tagging to identify a test as belonging to multiple different groups, combined with a way to intersect and join groups when specifying the tests to run.

- Control over the random seed for an entire test run. I was glad to see some support for test seeds. However, when invoking smelt to run multiple tests, it would be nice to be able to reproducibly seed the generation of each individual test seed (see the sketch below). Maybe this is outside the scope of this project?
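For example, something like this hypothetical scheme would let one run-level seed reproducibly fan out into per-test seeds:

    # Hypothetical: derive a stable per-test seed from a single run seed,
    # so re-running with the same run seed reproduces every individual test.
    import hashlib

    def test_seed(run_seed: int, test_name: str, repeat: int) -> int:
        digest = hashlib.sha256(f"{run_seed}/{test_name}/{repeat}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    # test_seed(12345, "cache_stress", 3) is the same on every machine,
    # but unrelated tests (and repeats) still get independent seeds.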

Great things to see:

- Procedural test generation is a key feature.

- Extendable command invocation

- SLURM support is in the roadmap, also an important feature for groups that use SLURM.


Thanks for checking out smelt!

Reproducible seeding, and more generically having "test list arguments", are definitely things we have in view, and wouldn't be outside the scope of the project.

Tagging is an interesting problem -- do you find that querying schemes emerge when trying to run a specific set of tests, or does running a test group named "A" normally suffice? Test tagging should be on the roadmap, thanks for calling it out.

Re: test weighting, I'm unsure what you're describing exactly -- would you like a mechanism to describe how many times a "root test" should be duplicated, each with some sampling of random input parameters?


This is an awesome write up!

I have one of these scopes, with exactly this issue after a long period in storage. When I was looking into it a few years ago the failure mode was known but I couldn't find a recovery procedure. I'll need to give this a try when I get a few hours. I have been putting off getting a new scope in hopes that I could repair this issue.


Thanks! Go for it or contact Keysight to see if they will cover it. Good luck!


I don't understand why fuzzing hardware is being presented as a new thing here... In silicon validation, constrained random testing has been the standard methodology for at least 10 years. With the complexity of modern CPUs, it's effectively impossible to validate the hardware _without_ using some kind of randomized testing, which looks a whole lot like fuzzing to me. What is new here? Or is this a case of someone outside the industry rediscovering known techniques?
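For anyone outside the industry, constrained random generation looks roughly like this (a toy Python sketch, not any particular vendor flow):

    # Toy constrained-random stimulus generator: random, but weighted and
    # bounded so every generated transaction is legal for the design under test.
    import random

    def gen_transaction(rng: random.Random) -> dict:
        op = rng.choices(["READ", "WRITE", "ATOMIC"], weights=[5, 4, 1])[0]
        addr = rng.randrange(0, 1 << 32, 4)   # constraint: word-aligned address
        size = rng.choice([1, 2, 4])          # constraint: legal burst sizes
        return {"op": op, "addr": addr, "size": size}

    rng = random.Random(0xC0FFEE)             # seeded for reproducibility
    stimulus = [gen_transaction(rng) for _ in range(1000)]

Squint at that and it's a fuzzer with a grammar, which is why the distinction being drawn here isn't obvious to me.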


The claim here is that "existing hardware fuzzers are inefficient in verifying processors because they make many static decisions disregarding the design complexity and the design space explored".

"To address this limitation of static strategies in fuzzers, we develop an approach to equip any hardware fuzzer with a dynamic decision-making technique, multi-armed bandit (MAB) algorithms." (From their paper: https://arxiv.org/pdf/2311.14594.pdf)

They're saying their fuzzer is faster and better at finding bugs than other fuzzing approaches.


Fuzzing and constrained random, while both based on randomisation, are not the same thing.

A big problem of fuzzers from the point-of-view of hardware validation is that it's unclear what coverage guarantees they give. Would you tape out your processor design, once your fuzzer no longer finds bugs, if you had no idea about test coverage? OTOH, fuzzing has been very effective in software, so it is natural to ask if those gains can also be had for hardware.


The grandparent post is incorrect. H/w silicon validation with constrained random testing hasn't been the norm for just 10 years; it's at least 20, which is when I first got into that industry.

And yes, we had coverage driven verification back in 2005 as well. No, we didn't "tape out" our CPUs until we'd hit our testing plan which was defined by coverage metrics.

P.S. Pre-si verification had testing pipelines way back then, before they became the norm in s/w.


> we didn't "tape out" our CPUs until we'd hit our testing plan which was defined by coverage metrics.

Just because you reached the same coverage doesn't mean you are doing the same thing, or that you were equally efficient in reaching it.

Besides, as any tester can tell you, coverage is not a fool-proof metric.


Coverage in hardware is different from coverage in software. A solid part of SystemVerilog deals with it, plus parts of UVM. In addition, coverage from simulation and from property checking (aka formal) can be used together.
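As a toy illustration of the idea (Python standing in for what a SystemVerilog covergroup declares natively; bins and names are made up): you declare bins up front, every simulation sample lands in them, and "done" is defined by which bins and crosses got hit.

    # Toy analogue of a SystemVerilog covergroup: named bins plus a cross.
    addr_bins = {"low": range(0, 256), "high": range(256, 4096)}
    op_bins = {"read", "write"}
    cross_hits = set()  # (addr_bin, op) pairs actually observed

    def sample(addr: int, op: str):
        for name, addr_range in addr_bins.items():
            if addr in addr_range and op in op_bins:
                cross_hits.add((name, op))

    def coverage() -> float:
        return len(cross_hits) / (len(addr_bins) * len(op_bins))

This is functional coverage over the stimulus and state space, not line/branch coverage of code, which is why software-style fuzzing metrics don't map over directly.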


https://www.prbs23.com/blog/

Started a couple years ago, but I don't have as much time to write as I'd like. The majority of topics relate to ASIC design and verification, since that is my day job, but with some other side projects mixed in.


I rewrote the UI for an off the shelf WiFi digital photo frame so that it shows the latest raw images sent back from the Perseverance Mars rover. https://prbs23.com/blog/posts/picture-frame-from-mars/

The picture frame secretly ran Android under the hood. That meant I could replace the app, which showed pictures pulled from the manufacturer's server, with one that pulls photos from the NASA website. Fortunately they left ADB enabled with root permissions, so it was trivial to replace their startup app with my own. All the source code is public here: https://gitlab.com/prbs23/mars-photo-stream
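The core of the app is just polling a JSON feed and downloading the newest image; roughly like this (the endpoint and JSON field names here are approximations from memory -- the linked repo has the real ones):

    # Sketch of the fetch loop: poll the Mars 2020 raw-image feed and save
    # the newest image. Endpoint and JSON fields are placeholders; see the
    # repo for the exact ones.
    import requests

    FEED = "https://mars.nasa.gov/rss/api/?feed=raw_images&category=mars2020&feedtype=json"

    def save_latest(path: str):
        data = requests.get(FEED, timeout=30).json()
        url = data["images"][0]["image_files"]["large"]
        with open(path, "wb") as f:
            f.write(requests.get(url, timeout=60).content)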


If you are interested in more OSSI varieties like those, check out Wild Garden Seed: https://www.wildgardenseed.com

The owner, the Frank Morton mentioned in the article, is the original breeder of both Gildenstern and Dazzling Blue (and they probably grew the seed being sold by Fedco). All the varieties he bred (of which there are many) are released as OSSI varieties, plus they sell other OSSI varieties as well.


Most architectures do not provide an interrupt that is generated by an integer overflow. Since this would be a significant architectural change in the hardware, it can't be simply added in.

Additionally, if you are running inside an operating system, handling an interrupt usually incurs a trip through the kernel, which would add extra overhead every time an overflow did happen. Since there's a lot of software which depends on integers overflowing, this overhead on each overflow could significantly impact legacy software.
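For comparison, here is the check that software has to do explicitly today, which an overflow flag or trap gives you for free in hardware (a Python sketch emulating 32-bit two's-complement addition; inputs are assumed to be 32-bit bit patterns in 0..0xFFFFFFFF):

    # Emulate 32-bit wrapping addition and detect signed overflow explicitly.
    def add32(a: int, b: int) -> tuple[int, bool]:
        r = (a + b) & 0xFFFFFFFF
        # Signed overflow: the operands share a sign bit but the result's differs.
        sa, sb, sr = (a >> 31) & 1, (b >> 31) & 1, (r >> 31) & 1
        overflow = (sa == sb) and (sr != sa)
        return r, overflow

Doing that branch (or its equivalent) after every arithmetic instruction is exactly the overhead an interrupt-on-overflow design is trying to avoid.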


With a new instruction, none of that would be an issue. And as someone pointed out, on x86 it wouldn't even be a new instruction: INTO already exists. But apparently it didn't make it into x64 because nobody used it :/


I somewhat recently blogged about my process of setting up a private LoRaWAN network as well.

https://prbs23.com/blog/posts/getting-started-with-a-private...

LoRaWAN is definitely an interesting mix of open source implementations and open specifications, and proprietary poorly documented hardware. Unfortunately I haven't had time lately to actually use my network for anything real.


As others have said, it's not the concept of DWDM that's new here; that has been around for a while. What is interesting is that they have managed to integrate a DWDM transceiver onto a single silicon-photonic chip. As someone who has worked in similar silicon-photonic research, the challenge is to create the required filters, modulators, and demodulators that are stable enough over environmental conditions and manufacturing process variation for commercial production. Most silicon-photonic structures are quite sensitive to temperature in particular.

