The Evolution of StarFire's On-Device AI Capabilities

Published May 6, 2023, 10:58:37


Overview

StarFire is OPPO's self-developed device-cloud collaborative machine learning platform. It combines cloud capabilities and computing power with end devices, and is one of the six core capabilities of AndesBrain. To accelerate AI model development and enhance the AI capabilities of smart devices, StarFire has collaborated with Google Cloud and Qualcomm to bring Google Cloud Vertex AI Neural Architecture Search (Vertex AI NAS) to smartphones for the first time. This article focuses on the challenges of developing AI models for smart devices, the advantages Vertex AI NAS brings to AI model development, and how OPPO uses Vertex AI NAS to improve the energy efficiency of AI models on smart devices.

Consumers today have more options than ever, which means businesses need to be dedicated to bringing the best-possible device performance to end users. At leading mobile device manufacturer OPPO, we’re constantly exploring ways to make better use of the latest technologies, including cloud and AI. One example is our AndesBrain strategy, which aims to make end devices smarter by integrating cloud tools with mobile hardware in the development process of AI models on mobile devices.

OPPO adopted this strategy because we believe in the potential of AI capabilities on mobile devices. On one hand, running AI models on end devices can better protect user privacy by keeping user data on mobile hardware, instead of sending them to the cloud. On the other hand, the computing capabilities of mobile chips are rapidly increasing to support more complex AI models. By linking cloud platforms with mobile chips for AI model training, we can leverage cloud computing resources to develop high-performance machine learning models that adapt to different mobile hardware.

In 2022, OPPO started implementing the AI engineering strategy on StarFire, our self-developed machine learning platform that merges cloud and end devices and forms one of the six core capabilities of AndesBrain. Through StarFire, we're able to take advantage of various advanced cloud technologies to meet our development needs. To facilitate the AI model development process and enhance AI capabilities on mobile devices, we've collaborated with Google Cloud and Qualcomm Technologies to embed Google Cloud Vertex AI Neural Architecture Search (Vertex AI NAS) on a smartphone for the first time. Let's explore what we learned.



Challenges of developing AI models on mobile devices


One major bottleneck in developing AI models for mobile devices is the limited computing capability of mobile chips compared to computer chips. Before using Vertex AI NAS, OPPO's engineers mainly used two methods to develop AI models that mobile devices can support. One is simplifying neural networks trained on cloud platforms through network pruning or model compression to make them suitable for mobile chips. The other is adopting lighter neural network architectures built on techniques like depthwise separable convolutions.
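To see why depthwise separable convolutions yield lighter networks, it helps to compare parameter counts. The following minimal sketch (illustrative only, not OPPO's code) counts the weights of a standard convolution versus a depthwise separable one:

```python
# Compare parameter counts (bias terms ignored) of a standard k x k
# convolution against a depthwise separable convolution, the technique
# behind many lightweight mobile architectures.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Standard convolution: every output channel sees every input channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 128, 128
    std = standard_conv_params(k, c_in, c_out)        # 147,456 weights
    sep = depthwise_separable_params(k, c_in, c_out)  # 17,536 weights
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a typical 3x3 layer with 128 channels in and out, the separable form needs roughly 8x fewer parameters, which is why it is a common building block for on-device models.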

These two methods come with three challenges:

1. Long development time: To see if an AI model can smoothly run on a mobile device, we need to repeatedly run tests and manually adjust the model according to the hardware characteristics. As each mobile device has different computing capabilities and memory, the customization of AI models requires significant labor costs and leads to long development time.
2. Lower accuracy: Due to their limited computing capabilities, mobile devices only support lighter AI models. However, after AI models trained on cloud platforms are pruned or compressed, the accuracy rate of the models decreases. We might be able to develop an AI model with a 95% accuracy rate in a cloud environment, but it won’t be able to run on end devices.
3. Performance compromises: For each AI model on mobile devices, we need to strike a balance among accuracy rate, latency, and power consumption. High accuracy, low latency, and low power consumption can't be achieved at the same time, so performance compromises are inevitable.


Advantages of Vertex AI NAS for AI model development


The neural architecture search technology was first developed by the Google Brain team in 2017 to create AI trained to optimize the performance of neural networks according to developers’ needs. By automatically discovering and designing the best architecture for a neural network for a specific task, the neural architecture search technology enables developers to more easily achieve better AI model performance.
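The core idea can be illustrated with a toy random-search loop: sample candidate architectures from a search space, score each with a reward function, and keep the best. Real NAS controllers, including Vertex AI NAS, use far more sophisticated search strategies; the search space and reward below are made-up examples.

```python
# Toy illustration of the NAS idea: sample architectures from a search
# space and keep the one with the highest reward. The search space and
# reward function here are hypothetical stand-ins.
import random

SEARCH_SPACE = {
    "depth": [2, 3, 4],     # number of blocks
    "width": [16, 32, 64],  # channels per block
    "kernel": [3, 5],       # convolution kernel size
}

def random_search(reward_fn, trials: int, seed: int = 0) -> dict:
    """Return the sampled architecture with the highest reward."""
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    for _ in range(trials):
        arch = {name: rng.choice(opts) for name, opts in SEARCH_SPACE.items()}
        r = reward_fn(arch)
        if r > best_reward:
            best, best_reward = arch, r
    return best
```

In practice the reward comes from actually training and measuring each candidate, which is what makes a managed, parallelized service valuable.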

Vertex AI NAS is currently the only fully-managed neural architecture search service available on a public cloud platform. As OPPO’s machine learning platform StarFire is cloud-based, we can easily connect Vertex AI NAS with our platform to develop AI models. On top of that, we chose to adopt Vertex AI NAS for on-device AI model development because of the following three advantages:

1. Automated neural network design: As mentioned, developing AI models on mobile devices can be labor intensive and time consuming. Because the neural network design is automated through Vertex AI NAS, we can greatly reduce development time and easily adapt an AI model to different mobile chips.

2. Custom reward parameters: Vertex AI NAS supports custom reward parameters, which is rare among NAS tools. This means we can freely add the search constraints we need our AI models to be optimized for. By leveraging this feature, we added power as a search constraint and successfully lowered the energy consumption of our AI model on mobile devices by 27%.

3. No need to compress AI models for mobile devices: Based on the real-time rewards sent back from the connected mobile chips, Vertex AI NAS can directly design a neural network architecture suitable for mobile devices. The end result can be run on end devices without being further processed, which saves time and effort for AI model adaptation.



How OPPO uses Vertex AI NAS to enhance energy efficiency of AI models on mobile devices


Lowering power consumption is key to providing an excellent user experience for AI models on mobile devices, particularly compute-intensive models for multimedia and image processing. If an AI model consumes too much power, mobile devices can overheat and quickly drain their batteries. That is why OPPO's primary aim in using Vertex AI NAS is to boost the energy efficiency of AI processing on mobile devices.

To achieve this goal, we first added power as a custom search constraint to Vertex AI NAS, which only supports latency and memory rewards by default. This way, Vertex AI NAS can search neural networks based on the rewards of power, latency, and memory, letting us reduce power consumption of our AI models while reaching our desired levels of latency and memory consumption.
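One common way to fold several hardware constraints into a single search objective is a soft-penalty reward in the style popularized by MnasNet. The sketch below is illustrative only: the actual Vertex AI NAS reward configuration differs, and the exponents and targets are made-up values.

```python
# Illustrative multi-objective NAS reward: accuracy scaled by soft
# penalties for latency and power. Negative exponents shrink the reward
# when a measured cost exceeds its target and grow it slightly when the
# cost is under target. All constants here are hypothetical.

def nas_reward(accuracy: float,
               latency_ms: float, target_latency_ms: float,
               power_mw: float, target_power_mw: float,
               beta: float = -0.07, gamma: float = -0.07) -> float:
    """Combine accuracy, latency, and power into one scalar reward."""
    return (accuracy
            * (latency_ms / target_latency_ms) ** beta
            * (power_mw / target_power_mw) ** gamma)
```

With a reward of this shape, a candidate that exactly meets its latency and power targets scores its raw accuracy, while candidates that overshoot either budget are penalized smoothly rather than rejected outright, letting the search trade the three objectives off against each other.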

Then, we connected the StarFire platform with Vertex AI NAS through Cloud Storage. At the same time, StarFire is linked with a smartphone equipped with Qualcomm’s Snapdragon 8 Gen 2 chipset through the SDK provided by Qualcomm. Under this structure, Vertex AI NAS can constantly send the latest neural network architecture via Cloud Storage to StarFire, which then exports the model to the chipset for testing. The test results are sent back to Vertex AI NAS again through StarFire and Cloud Storage, allowing it to conduct the next round of architecture search based on the rewards.
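Structurally, each round of this loop is: fetch a candidate architecture (via Cloud Storage), benchmark it on the chipset, and report the measured rewards back for the next search round. The sketch below captures only that control flow; the real StarFire/Vertex AI NAS/Qualcomm SDK interfaces are not public, so the I/O and device steps are injected as plain callables.

```python
# Structural sketch of the search feedback loop described above. The
# three callables stand in for Cloud Storage transfers and on-device
# benchmarking via the vendor SDK; they are assumptions, not real APIs.
from typing import Callable, Dict, List

def search_loop(fetch_candidate: Callable[[int], dict],
                benchmark_on_device: Callable[[dict], Dict[str, float]],
                report_rewards: Callable[[int, Dict[str, float]], None],
                rounds: int) -> List[Dict[str, float]]:
    """Run `rounds` iterations of fetch -> on-device test -> report."""
    history = []
    for i in range(rounds):
        arch = fetch_candidate(i)            # candidate arrives via Cloud Storage
        metrics = benchmark_on_device(arch)  # latency/power/memory on the chipset
        report_rewards(i, metrics)           # rewards drive the next search round
        history.append(metrics)
    return history
```

Keeping the loop separate from the transport and device layers also makes it easy to dry-run the pipeline with stubs before attaching real hardware.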

This process was repeated until we achieved our target. In the end, we realized a 27% reduction in power and a 40% reduction in computing latency for our AI model, while maintaining the same accuracy as before the optimization.




Broadening the application range

The first successful AI model optimization through Vertex AI NAS is truly exciting for us. We plan to deploy this energy-efficient AI model on our future smartphones, and to apply the same Vertex AI NAS-supported model training process to the algorithm development of our other AI products. Besides power, we also hope to add other reward parameters, such as bandwidth and operator friendliness, as search constraints in Vertex AI NAS for more comprehensive model optimization.
Vertex AI NAS has significantly facilitated the optimization of our AI capabilities on smartphones, and we believe that there is still great potential to explore. We will continue collaborating with Google Cloud to expand our use of Vertex AI NAS. For the developers who are interested in adopting Vertex AI NAS, we advise targeting the most relevant hardware reward parameters before launching the development process, and becoming familiar with the ways to build search spaces if custom search constraints are needed.



Authors


Hongyu Li, Senior Algorithm Engineer, OPPO
Leslie Li, Head of AI Platform, OPPO
Special thanks to Yuwei Liu, Senior Hardware Engineer at OPPO, for contributing to this post.
About AndesBrain

OPPO AndesBrain (安第斯智能云) is an intelligent cloud for a broad range of end devices, serving individuals, families, and developers, with the mission of "making devices smarter." As one of OPPO's three core technologies, AndesBrain provides device-cloud collaborative data storage and intelligent computing services, acting as the "digital brain" for the convergence of all things.

This article was first published on the WeChat official account AndesBrain (安第斯智能云): StarFire的智能终端AI能力进化之路

Reposted by CN-SEC中文网 (please retain the original link): https://cn-sec.com/archives/1709643.html
