Guest Contributor: Bryon Moyer, Editor of EE Journal
Sensor fusion has been all the rage over the last year. We’ve all watched as numerous companies – both makers of sensors and the “sensor-agnostic” folks – have sported dueling algorithms. Sensor fusion has broadened into “data fusion,” where non-sensor data like maps can play a part. This drama increasingly unfolds on microcontrollers serving as “sensor hubs.”
But there’s something new stirring. While everyone has been focusing on the algorithms and on which microcontrollers are fastest or consume the least power, the suggestion is being put forward that the best way to execute sensor fusion may not be in software at all: it may be in hardware.
Software and hardware couldn’t be more different. Software is highly flexible, runs anywhere (assuming compilers and such), and executes serially. (So far, no one that I’m aware of has proposed going to multicore sensor fusion for better performance.) Hardware is inflexible, may or may not depend on the underlying platform, and can run blazingly fast because of massive inherent parallelism.
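To make the software side of that contrast concrete, here’s a minimal sketch of a single serial fusion step in C: a complementary filter blending a gyro rate with an accelerometer-derived angle. The constants and the fuse_pitch() name are illustrative assumptions for this sketch, not anyone’s tuned, shipping algorithm.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Assumed values for this sketch, not production parameters. */
#define ALPHA 0.98  /* weight on the integrated-gyro path */
#define DT    0.01  /* sample period in seconds (100 Hz assumed) */

/* Fuse a gyro pitch rate (deg/s) with an accelerometer-derived pitch
 * angle (deg) into a single estimate (deg). */
static double fuse_pitch(double pitch, double gyro_rate,
                         double ax, double az)
{
    /* The accelerometer gives an absolute but noisy angle... */
    double accel_pitch = atan2(ax, az) * 180.0 / M_PI;
    /* ...the gyro gives a smooth but drifting increment; blend them. */
    return ALPHA * (pitch + gyro_rate * DT) + (1.0 - ALPHA) * accel_pitch;
}

int main(void)
{
    double pitch = 0.0;
    /* One sample in, one estimate out, strictly in order -- the serial
     * software model in a nutshell. */
    pitch = fuse_pitch(pitch, 1.5, 0.10, 0.99);
    printf("pitch estimate: %.3f deg\n", pitch);
    return 0;
}
```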
Of course, then there’s the programmable version of hardware, the FPGA. These are traditionally large and power-hungry – not fit for phones. A couple of companies – QuickLogic and Lattice – have, however, been targeting phones with small, ultra-low-power devices and now have their eyes on sensor hubs. Lattice markets its solution as a straight-up FPGA; QuickLogic’s device is based on FPGA technology, but the company buries that fact so that it looks like a custom part.
Which solution is best is by no means a simple question. Hardware can provide much lower power – unless sensor hub power is swamped by something else, in which case it theoretically doesn’t matter. (Although I’ve heard few folks utter “power” and “doesn’t matter” in the same breath.) Non-programmable hardware is great for stable, well-understood functions; software is good for algorithms in flux. Much of sensor fusion is in flux, although it does involve some elements that are well understood.
Which suggests that this might not be just a hardware-vs-software question: perhaps some portions remain in software while others get hardened (a sketch of such a split follows below). But do you then end up with too many chips? A sensor hub is supposed to keep calculations away from the AP. If done as hardware, that hub can be an FPGA (I can’t imagine an all-fixed-hardware hub at this stage of the game); if done in software, the hub can be a microcontroller. But if it’s a little of both hardware and software, do you need both the FPGA and the microcontroller?
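Here’s what such a partition might look like from the software side. Every name in it – fpga_lowpass(), fuse(), hub_step() – is hypothetical, standing in for whatever interface a hardened block would actually expose; the point is only the split: stable, compute-heavy filtering as a candidate for hardware, fast-changing fusion logic left in C.

```c
/* Hypothetical partition of a sensor-hub pipeline. None of these
 * names is a real vendor API; they're placeholders for the sketch. */

typedef struct { float x, y, z; } vec3;

/* Stage 1: fixed, well-understood, compute-heavy -> a candidate for
 * hardening. Stubbed in software here; in a real split, this call
 * would hand the sample to an FPGA block and read back the result. */
static vec3 fpga_lowpass(vec3 raw)
{
    return raw;
}

/* Stage 2: algorithm still in flux -> stays in software, where a
 * firmware update can revise it. Placeholder fusion step only. */
static float fuse(vec3 accel, vec3 gyro)
{
    return accel.x + gyro.x;
}

/* The hub's per-sample step: hardware does the steady work, software
 * does the part that's still changing. */
float hub_step(vec3 raw_accel, vec3 raw_gyro)
{
    vec3 a = fpga_lowpass(raw_accel);
    vec3 g = fpga_lowpass(raw_gyro);
    return fuse(a, g);
}
```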
Then there’s the issue of language. High-level algorithms start out abstract and get refined into runnable software in languages like C. Hardware, on the other hand, relies on languages like VHDL and Verilog – very different from software languages. Design methodologies are completely different as well. Automatically converting software into optimal hardware has long been a holy grail, and it remains out of reach. The conversion is easier than it used to be, and tools exist to help, but it still takes a hardware guy to do the work. The dream of software guys creating hardware remains a dream.
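One loop is enough to show the gap. The FIR filter below is perfectly natural serial C, but an efficient hardware version would flatten the loop into parallel multipliers feeding a pipelined adder tree – a restructuring that a hardware designer does by hand and that automatic C-to-gates flows still struggle to do well. The tap count and coefficients here are arbitrary.

```c
#include <stddef.h>

/* A 4-tap FIR filter in idiomatic serial C. Coefficients are
 * arbitrary values for the sketch. */
#define TAPS 4
static const float coeff[TAPS] = { 0.1f, 0.4f, 0.4f, 0.1f };

float fir(const float history[TAPS])
{
    float acc = 0.0f;
    /* Software view: one multiply-accumulate per iteration, in order.
     * Hardware view: all four multiplies fire at once into an adder
     * tree -- parallelism a hardware designer extracts by hand. */
    for (size_t i = 0; i < TAPS; i++)
        acc += coeff[i] * history[i];
    return acc;
}
```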
There’s an even more insidious challenge implicit in this discussion: hardware and software guys all too often simply fail to connect. They live in different silos. They do their work during different portions of the overall system design phase. And hardware is expected to be rock solid; we’re more tolerant (unfortunately) of flaws in our software – simply because they’re “easy” to fix. So last-minute changes in hardware involve far whiter knuckles than do such out-the-door fixes in software.
This drama is all just starting to play out, and the outcome is far from clear. Will hardware show up and get voted right off the island? Or will it be incorporated into standard implementations? Will it depend on the application or who’s in charge? Who will the winners and losers be?
Gather the family around and bring some popcorn. I think it’s going to be a show worth watching.