This is a talk I gave back in February at the TechEX conference in London.
Edge AI is becoming ingrained in the industry. This is especially evident now in 2025; edge AI is no longer a mythical creature or two separate words in a sentence, but a set of technologies and practices that is proven to deliver real value and is being adopted by the largest players across all kinds of verticals. But how does this latest and greatest category advance IoT, a technology that has already found mass adoption across all sectors?
The convergence of IoT, AI, and edge computing is reshaping industries, enabling smarter, faster, and more efficient systems. In the talk I highlighted how these technologies are evolving and why their integration is critical for the future of innovation.
In this video I talk about a plugin that I made for Edge Impulse (a platform for building edge AI).
The general approach is standalone and can be applied anywhere, though: use a foundational audio classifier to look at your unlabeled audio dataset! The approach is two-fold: first, we give the model a few samples of an event or sound type we are looking for and determine which of the classes the model already knows it belongs to. This can be thought of as encoding our sound events in the model's "language".
After that we feed the model larger samples that may or may not contain these events, and tell it to react only when the classes identified earlier show up.
This way we can explore large audio datasets quickly. There is a good chance that some classifications are not exactly right, but it gives you a subset of your audio to actually take a look at, instead of having to listen through all of it!
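The two phases above can be sketched in a few lines. This is a minimal, self-contained illustration, not the actual plugin: the sample rate, class names, and the toy zero-crossing-rate "classifier" are all assumptions made so the sketch runs on its own. A real pipeline would replace `classify` with a call to a pretrained foundational audio model (e.g. YAMNet) returning scores over its known classes.

```python
import math

SR = 16000  # assumed sample rate

def classify(window):
    """Toy stand-in for a foundational audio classifier. It picks one of
    three made-up classes from the zero-crossing rate; a real pipeline
    would run a pretrained model here and take its top-scoring class."""
    crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    rate = crossings * SR / len(window)  # crossings/sec ~ 2 * dominant freq
    if rate < 600:
        return 0   # "low hum"
    elif rate < 3000:
        return 1   # "speech-ish"
    return 2       # "whistle"

def encode_events(reference_clips):
    """Phase 1: map a few example clips of the event we care about onto
    the classes the model already knows (encoding the event in its
    'language')."""
    return {classify(clip) for clip in reference_clips}

def scan(audio, target_classes, win=SR):
    """Phase 2: slide over a long recording and flag the timestamps of
    windows whose predicted class matches one encoded earlier."""
    hits = []
    for start in range(0, len(audio) - win + 1, win):
        if classify(audio[start:start + win]) in target_classes:
            hits.append(start / SR)  # seconds into the recording
    return hits

def tone(freq, seconds=1.0):
    """Synthetic test clip: a pure sine tone."""
    n = int(SR * seconds)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

targets = encode_events([tone(2000)])                      # the event: a 2 kHz whistle
long_audio = tone(100) + tone(2000) + tone(100) + tone(2000)
print(scan(long_audio, targets))                           # → [1.0, 3.0]
```

The flagged timestamps are exactly the "subset of your audio to actually take a look at": you listen to those windows instead of the whole recording.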
Imagine you have a product. And you want it to react to a very specific keyword. Aaand you plan to sell millions of them all over the world. Problem? You don't speak French. Or Chinese. Or Spanish. You could hire lots of people on Mechanical Turk or a similar platform, but that needs a large upfront investment. Enter generative AI. In this video I'll show you how you can use Edge Impulse to get all the voice samples you need, train the keyword spotting model, and deploy it to the extremely tiny Cortex-M0+ based Arduino Nano RP2040 Connect. Magical stuff.
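Edge Impulse does the dataset work inside the platform; purely as a standalone illustration of one piece of that pipeline, here is a sketch of how a handful of generated keyword clips is commonly multiplied into a larger, more robust training set by mixing in background noise at a chosen SNR. All names, the 16 kHz rate, and the sine-tone "utterance" are assumptions for the sketch, not the platform's API.

```python
import math
import random

SR = 16000  # assumed sample rate

def rms(samples):
    """Root-mean-square level of a clip."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Overlay background noise onto a (possibly synthetic) keyword clip,
    scaling the noise so that 20*log10(rms(speech)/rms(scaled_noise))
    equals the requested SNR in dB."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# Toy stand-in for one generated utterance: a second of a 440 Hz tone.
random.seed(0)
keyword = [math.sin(2 * math.pi * 440 * i / SR) for i in range(SR)]
noise = [random.uniform(-1.0, 1.0) for _ in range(SR)]
noisy_copy = mix_at_snr(keyword, noise, snr_db=10)  # one augmented variant
```

Repeating this with different noise recordings and SNR levels turns a few synthetic voices per language into the varied dataset a keyword-spotting model needs.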
We have built a platform to build ML models and deploy them to edge devices, from Cortex-M3s to NVIDIA Jetsons to your computer (we can even run in WASM!)
You can create an account, build a keyword spotting model from your phone, and run it in WASM directly.
https://edgeimpulse.com
Now another key thing that drives edge ML adoption is the arrival of embedded accelerator ASICs / NPUs etc. that dramatically speed up computation at extremely low power - e.g. the BrainChip Akida neuromorphic co-processors [1]
Depending on the target device, Edge Impulse supports runtimes ranging from conventional TFLite to NVIDIA TensorRT, BrainChip Akida, Renesas DRP-AI, MemryX, Texas Instruments TIDL (ONNX / TFLite), TensaiFlow, EON (Edge Impulse's own runtime), etc.
I tried your platform for some experiments using an Arduino and it was a breeze, and an absolute treat to work with.
The platform documentation and support is excellent.
Thank you for developing it and offering it, along with documentation, to enable folks like me (who are not coders, but understand some coding) to test and explore :)
This is amazing to hear! Good luck with whatever project you build next!
I can recommend checking out building for more hardware targets - there are a lot of interesting chips that can take advantage of edge ML and are awesome to work with.
Gesture recognition using the onboard gyroscope and accelerometer (I think - it was 2 years ago!), and it took me some part of an afternoon.
I also used these two resources, which I found helpful (the book was definitely useful; I'm less sure whether the Arduino link is the same one I referred to then):
In my experience the Arm Clang compiler often produced code that is ~10-20 percent faster (hence less energy-consuming) than GCC at the same optimisation levels.
(Building bare-metal code with a lot of DSP and matrix multiplication.)
Yes, especially with SIMD NEON, where GCC produces horrible NEON code in all versions < GCC 12, even when using SIMD intrinsics. From version 12, GCC is at the same level as Clang.
Coming from the Hacker News Telegram bot, where this post has a lot of downvotes (21 down to 6 up, which is quite a lot given how many reactions there usually are), could someone (perhaps even someone who downvoted) explain why this news is met with such negativity?
I kind of assume it's the clichéd Java-oriented hatred, but I'm curious to hear opinions...
> What is this madness? This doesn't sound like engineering.
To me this sounds exactly like engineering.
You craft the tools and apps you want; you build frameworks around complicated concepts that simplify their understanding and usage, at the expense of losing some of the fundamentals.
You don't need to know the history and all the evolution of technology to apply it.
It is part of software science, for sure though. You can be good at software science and fundamental concepts, but this also does not imply you are a good software engineer.
Would software in general be better if the mentality you propose were the standard?
I doubt it, because the learning curve to enter the industry and even begin doing something would be immense; you would need to study as long as doctors do now, and only after 10 years would you be "trusted" with your work.
I do agree though that there is a bare minimum that one who calls themselves a "Software Engineer" should know and understand, like OS fundamentals, basics of compiler theory, etc., etc., but it is not as restrictive as you suggest.
Fair point.
The Things Stack - the server that is hosted by TTN - can also be installed and run locally for free, with the same feature set as the TTN hosted community network.
While the LoRa radio modulation (PHY) is licensed by Semtech, the MAC-layer protocol, LoRaWAN, is completely open and free to be used, implemented, and tinkered with.
https://lora-alliance.org/
Many implementations are also open-source, like The Things Stack from The Things Network.
It can also be run locally.
I'd want the entire stack to be open and free. When Layer 1 is closed and proprietary, I don't really care if I can build open source on top of it, in Layer 2.
It is, and there is a lot of effort being expended to remediate that - there's a reason why some people use pre-2011 ThinkPads. I think it would be pragmatic in new endeavors to do things correctly from the start, so that remediation isn't needed later.
Sorry, I know that sounds flippant, but hardware manufacturers have become exceptionally good at calibrating this "just enough tyranny to wear down the majority" thing. It's exactly this "not enough to leave my iPhone" attitude that they've figured out how to exploit.
There's a link upstream of fully OSS LoRa alternatives. Your question to me makes no sense: it's like asking if the developers of those alternative technologies use 5G-enabled phones, even though those also use proprietary technologies.
Maybe I should clarify: existing tech is what it is. We probably have to use it. But if we're building new tech, why would we waste time doing it incorrectly, when we have the chance to do it right?
I made a similar guide and walkthrough video about a year ago, covering gateway and TTN setup, creating firmware for a device locally, and a lot of other helpful steps and hardware options.