March 26, 2023 8:20 AM
Image Credit: Andrey Suslov/Getty
In the current economic climate, R&D dollars must stretch further than ever. Companies are wary of large greenfield technology and infrastructure investments, and the risk of failure puts significant pressure on project stakeholders.
However, this does not mean that innovation should stop or even slow down. For startups and large enterprises alike, working on new and transformative technologies is essential to securing current and future competitiveness. Artificial intelligence (AI) offers multifaceted solutions across a widening range of industries.
In the past decade, AI has played a significant role in unlocking a whole new class of revenue opportunities. From understanding and predicting user behavior to assisting in the generation of code and content, the AI and machine learning (ML) revolution has multiplied many times over the value that consumers get from their apps, websites and online services.
Yet, this revolution has largely been limited to the cloud, where virtually unlimited storage and compute — together with the convenient hardware abstraction that the primary public cloud services providers offer — make it relatively easy to establish best-practice patterns for every AI/ML application imaginable.
AI: Moving to the edge
With AI processing principally happening in the cloud, the AI/ML revolution has remained largely out of reach for edge devices. These are the smaller, low-power processors found on the factory floor, at the construction site, in the research lab, in the nature reserve, on the accessories and clothes we wear, inside the packages we ship and in any other context where connectivity, storage, compute and energy are limited or cannot be taken for granted. In these environments, compute cycles and hardware architectures matter, and budgets are measured not in numbers of endpoint or socket connections, but in watts and nanoseconds.
CTOs, engineering, data and ML leaders and product teams looking to break the next technology barrier in AI/ML must look towards the edge. Edge AI and edge ML present unique and complex challenges that require the careful orchestration and involvement of many stakeholders with a wide range of expertise from systems integration, design, operations and logistics to embedded, data, IT and ML engineering.
Edge AI implies that algorithms must run on purpose-specific hardware, ranging from gateways and on-prem servers at the high end to energy-harvesting sensors and MCUs at the low end. Ensuring the success of such products and applications requires that data and ML teams work closely with product and hardware teams to understand and consider each other's needs, constraints and requirements.
While the challenges of building a bespoke edge AI solution aren’t insurmountable, platforms for edge AI algorithm development exist that can help bridge the gap between the necessary teams, ensure higher levels of success in a shorter period of time, and validate where further investment should be made. Below are additional considerations.
Testing hardware while developing algorithms
It is neither efficient nor always possible for data science and ML teams to develop algorithms and then hand them to firmware engineers to fit on device. Hardware-in-the-loop testing and deployment should be a fundamental part of any edge AI development pipeline. It is hard to foresee the memory, performance and latency constraints that may arise while developing an edge AI algorithm without simultaneously having a way to run and test the algorithm on hardware.
Some cloud-based model architectures are also just not meant to run on any sort of constrained or edge device, and anticipating this ahead of time can save months of pain down the road for the firmware and ML teams.
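One way to catch this early is a back-of-envelope check that a candidate model even fits the target device before any firmware work begins. The sketch below is illustrative only: the layer sizes, flash budget and quantization assumptions are hypothetical examples, not figures from any real product.

```python
# Rough, hypothetical sketch: estimate whether a small dense network's
# quantized weights fit within an MCU's flash budget before handing the
# model to firmware engineers. All sizes and budgets are illustrative.

def dense_param_count(layer_sizes):
    """Weights + biases for a fully connected network."""
    return sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

def fits_in_flash(layer_sizes, flash_budget_bytes, bytes_per_param=1):
    """bytes_per_param=1 assumes int8 quantization; use 4 for float32."""
    footprint = dense_param_count(layer_sizes) * bytes_per_param
    return footprint, footprint <= flash_budget_bytes

# Example: 64 input features -> 32 -> 16 -> 4 classes, targeting a
# (hypothetical) MCU with 256 KB of flash available for weights.
footprint, ok = fits_in_flash([64, 32, 16, 4], flash_budget_bytes=256 * 1024)
```

A check like this is deliberately crude — it ignores activations, runtime overhead and operator support — but it can flag an obviously oversized cloud architecture months before it becomes a firmware team's problem.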
IoT data does not equal big data
Big data refers to large datasets that can be analyzed to reveal patterns or trends. Internet of Things (IoT) data, however, is not necessarily about quantity but about quality. Furthermore, this data can be time-series sensor data, audio or images, and pre-processing may be necessary.
Combining traditional sensor data processing techniques like digital signal processing (DSP) with AI/ML can yield new edge AI algorithms that provide accurate insights that were not possible with previous techniques. But IoT data is not big data, and so the quantity and analysis of these datasets for edge AI development will be different. Rapidly experimenting with dataset size and quality against the resulting model accuracy and performance is an important step on the path to production-deployable algorithms.
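As a minimal sketch of that DSP-plus-ML pattern, the example below extracts two classic signal features — RMS energy and zero-crossing rate — from a raw sensor window before any model sees the data. The window values and feature choices are hypothetical; real pipelines typically add spectral features as well.

```python
# Illustrative sketch: simple DSP feature extraction (RMS energy and
# zero-crossing rate) from a raw sensor window, as a pre-processing step
# before an ML classifier. Sample values are hypothetical.
import math

def rms(window):
    """Root-mean-square energy of a signal window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def zero_crossing_rate(window):
    """Fraction of consecutive sample pairs that change sign."""
    crossings = sum(
        1 for a, b in zip(window, window[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(window) - 1)

def extract_features(window):
    return [rms(window), zero_crossing_rate(window)]

# Example: a short vibration-like window from a (hypothetical) accelerometer.
window = [0.1, -0.2, 0.3, -0.1, 0.2, -0.3, 0.1, -0.2]
features = extract_features(window)
```

Feeding compact features like these into a small model, instead of raw samples into a large one, is often what makes an algorithm viable on a constrained device in the first place.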
Developing hardware is difficult enough
Building hardware is difficult enough without the added uncertainty of whether the selected hardware can run edge AI workloads. It is critical to begin benchmarking hardware even before the bill of materials has been selected. For existing hardware, constraints around the available on-device memory may be even more critical.
Even with early, small datasets, edge AI development platforms can begin providing performance and memory estimates of the type of hardware required to run AI workloads.
Having a process to weigh device selection and benchmarking against an early version of the edge AI model can ensure the hardware support is in place for the desired firmware and AI models that will run on-device.
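That weighing process can be as simple as shortlisting candidate devices against an early model's compute and memory estimates. The sketch below is a hypothetical illustration: the device specs, MAC counts and budgets are invented for the example, not real benchmarks of any part.

```python
# Hypothetical sketch: shortlist candidate devices against an early model's
# estimated compute (multiply-accumulate ops) and peak RAM. Device specs
# and model figures are illustrative, not real benchmark data.

def estimate_latency_ms(macs, macs_per_cycle, clock_hz):
    """Back-of-envelope inference latency from MAC count and clock speed."""
    cycles = macs / macs_per_cycle
    return cycles / clock_hz * 1000

def shortlist(devices, model_macs, model_ram_bytes, latency_budget_ms):
    """Keep devices that meet both the RAM and latency budgets."""
    keep = []
    for name, spec in devices.items():
        latency = estimate_latency_ms(
            model_macs, spec["macs_per_cycle"], spec["clock_hz"]
        )
        if spec["ram_bytes"] >= model_ram_bytes and latency <= latency_budget_ms:
            keep.append(name)
    return keep

devices = {
    "mcu_small": {"clock_hz": 80e6, "macs_per_cycle": 1, "ram_bytes": 64 * 1024},
    "mcu_large": {"clock_hz": 480e6, "macs_per_cycle": 2, "ram_bytes": 512 * 1024},
}
# A (hypothetical) early model: 2M MACs per inference, 128 KB peak RAM.
candidates = shortlist(
    devices, model_macs=2e6, model_ram_bytes=128 * 1024, latency_budget_ms=10
)
```

Re-running a check like this as the model evolves keeps hardware selection and algorithm development in sync, rather than discovering a mismatch after the bill of materials is locked.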
Build, validate and push new edge AI software to production
When selecting a development platform, it is also worth considering the engineering support provided by different vendors. Edge AI encompasses data science, ML, firmware and hardware, and it is important that vendors provide guidance in areas where internal development teams may need a bit of extra support.
In some cases, it is less about the actual model that will be developed, and more about the planning that goes into a system-level design flow incorporating data infrastructure, ML development tooling, testing, deployment environments and continuous integration, continuous deployment (CI/CD) pipelines.
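Within such a CI/CD pipeline, one concrete piece is a gate that fails the build when a candidate model regresses. The thresholds and metric names below are hypothetical examples of what such a gate might check.

```python
# Illustrative sketch of a CI gate for an edge AI pipeline: fail the build
# if a candidate model regresses on accuracy or exceeds the flash budget.
# Thresholds and metric names are hypothetical.

def ci_gate(metrics, min_accuracy=0.90, max_flash_bytes=200 * 1024):
    """Return (passed, reasons) for a candidate model's metrics."""
    reasons = []
    if metrics["accuracy"] < min_accuracy:
        reasons.append("accuracy below threshold")
    if metrics["flash_bytes"] > max_flash_bytes:
        reasons.append("model too large for flash budget")
    return (not reasons), reasons

# Example: a candidate model that meets both budgets.
passed, reasons = ci_gate({"accuracy": 0.93, "flash_bytes": 150 * 1024})
```

Wiring a gate like this into the deployment pipeline makes resource budgets a shared, enforced contract between the ML and firmware teams instead of a late-stage surprise.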
Finally, it is important for edge AI development tools to accommodate different users across a team — from ML engineers to firmware developers. Low code/no code user interfaces are a great way to quickly prototype and build new applications, while APIs and SDKs can be useful for more experienced ML developers who may work better and faster in Python from Jupyter notebooks.
Platforms provide the benefit of flexibility of access, catering to multiple stakeholders or developers that may exist in cross-functional teams building edge AI applications.
Sheena Patel is senior enterprise account executive for Edge Impulse.
Jorge Silva is senior solutions engineer for Edge Impulse.
Source: VentureBeat