TFLM vs Renesas eAI Translator
The goal of this study is to compare the performance of TensorFlow Lite for Microcontrollers (TFLM) and Renesas eAI Translator.
According to the eAI Translator's documentation, it should be able to convert RNN models; however, we encountered errors during the conversion (1). This study therefore focuses on FC, CNN, and TinyMLPerf models. In addition, since the eAI Translator targets only Renesas boards, the study covers only the Renesas RX65N board.
Still, we expect that converting RNN models should be possible with some modifications and under certain conditions.
Results are tabulated per model type with the columns: Models, Error, Execution Time, Flash Size, and RAM Usage.

Summary
Model Correctness:
- Some models failed to run on the board. (1) For the failed FC models, the program halts for an unknown reason; for the failed TinyMLPerf models, the program is too large for the Renesas RX65N board.
- The error rates of the TFLM and eAI Translator models are normally the same, but for some large models (namely CNN_5, CNN_6, CNN_7, TinyMLPerf_MBNet, and TinyMLPerf_ResNet) the eAI Translator shows an unacceptable error. (1)
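As an illustration of how a per-model error rate like this could be measured, here is a minimal sketch that compares top-1 predictions from a desktop reference run against logits captured from an on-device runtime. The function and variable names are hypothetical, not taken from the study's tooling:

```python
import numpy as np

def top1_mismatch_rate(reference_logits, device_logits):
    """Fraction of samples whose top-1 class differs between two runtimes.

    Both arrays have shape (num_samples, num_classes), e.g. desktop TFLite
    outputs vs. logits read back from the board over a serial link.
    """
    ref = np.argmax(reference_logits, axis=1)
    dev = np.argmax(device_logits, axis=1)
    return float(np.mean(ref != dev))

# Tiny synthetic example: 4 samples, 3 classes, one disagreement.
ref = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.7, 0.1],
                [0.1, 0.1, 0.8],
                [0.6, 0.3, 0.1]])
dev = ref.copy()
dev[3] = [0.1, 0.8, 0.1]             # runtime disagrees on the last sample
print(top1_mismatch_rate(ref, dev))  # 1 of 4 samples differ -> 0.25
```

A threshold on this rate (or on a per-logit numeric error) is one way to decide whether a converted model's error is "acceptable", as discussed in the conclusion.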
Execution Time:
- Note that we have not used CMSIS-NN with TFLM; the int8-only TFLM variants could therefore yield better results, so we excluded them from the execution-time comparison.
- eAI Translator is usually faster. (1) This holds especially for smaller models; for larger models the two are roughly equal, and TFLM may even be slightly faster.
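For context on what the int8-only variants involve, the following is a minimal sketch of the affine int8 quantization scheme that TFLM's int8 kernels operate on. The tensor values and quantization parameters are illustrative, not taken from any model in the study:

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    """Affine quantization: q = round(x / scale) + zero_point, clipped to int8."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point):
    """Inverse map back to real values: x ~ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative tensor and quantization parameters.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 127, 0
q = quantize_int8(x, scale, zero_point)
x_hat = dequantize_int8(q, scale, zero_point)
print(np.max(np.abs(x - x_hat)))  # round-trip error bounded by ~scale/2
```

The bounded round-trip error shown here is the per-value cost of int8 storage; whether it is acceptable end-to-end is exactly what the error-rate comparison above measures.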
Flash Size: eAI Translator is better.

RAM Usage: eAI Translator is slightly better.

Conclusion: If the error of the eAI Translator is acceptable, it is a better choice than TFLM for the Renesas RX65N.