Automate

edgemark.models.automate.automate

main

main(cfg_path=config_file_path, **kwargs)

Compile, upload, and test the models on the hardware device.

Parameters:

- cfg_path (str): The path to the configuration file. The configuration file that this path points to should contain the following keys:
    - software_platform (str): The software platform to be tested. It should be one of the following: ['TFLM', 'EI', 'Ekkono', 'eAI_Translator']
    - hardware_platform (str): The hardware platform to be tested.
    - linkers_dir (str): The directory containing the linker files.
    - save_dir (str): The directory to save the benchmarking results.
    - benchmark_overall_timeout (float): The overall timeout for reading the benchmark output, in seconds.
    - benchmark_silence_timeout (float): The silence timeout for reading the benchmark output, in seconds.
  Default: config_file_path
- **kwargs (dict): Keyword arguments that override entries in the configuration file. Default: {}

Returns:

- list: A list of dictionaries containing the following keys for each target model:
    - dir (str): The directory of the target model.
    - result (str): Result of benchmarking the model. It can be either "success" or "failed".
    - error (str): Error message in case of failure.
    - traceback (str): Traceback in case of failure. Either this or 'error_file' will be present.
    - error_file (str): The path to the error file in case of failure. Either this or 'traceback' will be present.

Source code in edgemark/models/automate/automate.py
def main(cfg_path=config_file_path, **kwargs):
    """
    Compile, upload, and test the models on the hardware device.

    Args:
        cfg_path (str): The path to the configuration file.
            The configuration file that this path points to should contain the following keys:
                - software_platform (str): The software platform to be tested. It should be one of the following: ['TFLM', 'EI', 'Ekkono', 'eAI_Translator']
                - hardware_platform (str): The hardware platform to be tested.
                - linkers_dir (str): The directory containing the linker files.
                - save_dir (str): The directory to save the benchmarking results.
                - benchmark_overall_timeout (float): The overall timeout for reading the benchmark output in seconds.
                - benchmark_silence_timeout (float): The silence timeout for reading the benchmark output in seconds.
        **kwargs (dict): Keyword arguments that override entries in the configuration file.

    Returns:
        list: A list of dictionaries containing the following keys for each target model:
            - dir (str): The directory of the target model.
            - result (str): Result of benchmarking the model. It can be either "success" or "failed".
            - error (str): Error message in case of failure.
            - traceback (str): Traceback in case of failure. Either this or 'error_file' will be present.
            - error_file (str): The path to the error file in case of failure. Either this or 'traceback' will be present.
    """
    cfg = OmegaConf.load(cfg_path)
    cfg.update(OmegaConf.create(kwargs))

    assert cfg.software_platform is not None, "The software platform must be provided."
    assert cfg.software_platform in ["TFLM", "EI", "Ekkono", "eAI_Translator"], "The software platform must be either TFLM, EI, Ekkono, or eAI_Translator."
    assert cfg.hardware_platform is not None, "The hardware platform must be provided."

    spec = importlib.util.spec_from_file_location("imported_module", cfg.hardware_platform_path)
    imported_module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(imported_module)
    hardware = imported_module.Hardware(cfg.software_platform)

    if cfg.software_platform == "TFLM":
        targets = OmegaConf.load(os.path.join(cfg.linkers_dir, "tflm_converted_models_list.yaml"))
    elif cfg.software_platform == "EI":
        targets = OmegaConf.load(os.path.join(cfg.linkers_dir, "ei_converted_models_list.yaml"))
    elif cfg.software_platform == "Ekkono":
        targets = OmegaConf.load(os.path.join(cfg.linkers_dir, "ekkono_converted_models_list.yaml"))
    elif cfg.software_platform == "eAI_Translator":
        targets = OmegaConf.load(os.path.join(cfg.linkers_dir, "translator_converted_models_list.yaml"))

        # check if the user has put the eAI_Translator files in the target directories
        eAI_Translator_dirs_exist = True
        for target_dir in targets:
            if not os.path.exists(os.path.join(target_dir, "Translator")):
                eAI_Translator_dirs_exist = False
                break

        if not eAI_Translator_dirs_exist:
            print("Please use the Renesas eAI_Translator and convert TFLite models to eAI_Translator files.")
            print("You can skip a model by creating an empty 'Translator' folder in the target directory.")

        while not eAI_Translator_dirs_exist:
            eAI_Translator_dirs_exist = True
            for target_dir in targets:
                if not os.path.exists(os.path.join(target_dir, "Translator")):
                    eAI_Translator_dirs_exist = False
                    print("")
                    print("TFLite model should be found in: {}".format(target_dir.replace("eAI_Translator", "tflite")))
                    print("eAI_Translator model should be placed in: {}".format(target_dir))

            if not eAI_Translator_dirs_exist:
                print("")
                input("Press Enter when you have put the eAI_Translator files in the target directories ...")

        # an intentionally emptied 'Translator' folder means the user chose to skip that model;
        # iterate over a copy so removal doesn't skip entries
        for target_dir in list(targets):
            if os.path.exists(os.path.join(target_dir, "Translator")) and not os.listdir(os.path.join(target_dir, "Translator")):
                targets.remove(target_dir)

    if cfg.software_platform == "TFLM":
        replacing_items = ["model.h", "model.cpp", "data.h", "data.cpp"]
    elif cfg.software_platform == "EI":
        replacing_items = ["model", "data.h", "data.cpp"]
    elif cfg.software_platform == "Ekkono":
        replacing_items = ["model.h", "model.c", "data.h", "data.c"]
    elif cfg.software_platform == "eAI_Translator":
        replacing_items = ["Translator", "data.h", "data.c"]

    output = [{"dir": target_dir} for target_dir in targets]

    benchmarking_result = []
    for i, target_dir in enumerate(targets):
        try:
            target_dir = target_dir.replace("\\", "/")
            save_dir = os.path.join(target_dir, "benchmark_result", cfg.software_platform + " + " + cfg.hardware_platform)

            title = "Testing the model in {} ({}/{})".format(target_dir, i+1, len(targets))
            print("\n")
            print("="*110)
            print("-"*((110-len(title)-2)//2), end=" ")
            print(title, end=" ")
            print("-"*((110-len(title)-2)//2))
            print("="*110)

            print("Placing the model's files/folders ...", end=" ", flush=True)
            destination_dir = hardware.get_model_dir()
            _placer(target_dir, destination_dir, replacing_items)

            if cfg.software_platform == "EI":
                if cfg.hardware_platform == "NUCLEO-L4R5ZI":
                    # remove all folders inside "{destination_dir}/model/edge-impulse-sdk/porting" except "stm32-cubeai"
                    for subdir in next(os.walk(os.path.join(destination_dir, "model/edge-impulse-sdk/porting")))[1]:
                        if subdir != "stm32-cubeai":
                            _delete_files_in_dir(os.path.join(destination_dir, "model/edge-impulse-sdk/porting", subdir))
                elif cfg.hardware_platform == "RenesasRX65N":
                    # remove all folders inside "{destination_dir}/model/edge-impulse-sdk/porting"
                    for subdir in next(os.walk(os.path.join(destination_dir, "model/edge-impulse-sdk/porting")))[1]:
                        _delete_files_in_dir(os.path.join(destination_dir, "model/edge-impulse-sdk/porting", subdir))
                    _placer(os.path.dirname(cfg.EI_general_porting_dir), os.path.join(destination_dir, "model/edge-impulse-sdk/porting"), ["general"])

            print("Done")

            # find the smallest working arena size for the model. The search is TFLM-specific, so it is only done for that platform.
            if cfg.arena_finder and cfg.software_platform == "TFLM":
                print("Finding the best arena size ...")

                model_header_path = os.path.join(destination_dir, "model.h")

                original_arena_size = None
                with open(model_header_path, "r") as f:
                    pattern = r"#define ARENA_SIZE (\d+)"
                    matches = re.findall(pattern, f.read())
                    if matches:
                        original_arena_size = int(matches[0])

                if original_arena_size is not None:
                    arena_size = original_arena_size - 10240 + 2048     # remove the 10 kB safety margin added earlier, then add 2 kB of headroom
                else:
                    arena_size = 4096

                if arena_size < 16384:
                    search_resolution = 512
                elif arena_size < 65536:
                    search_resolution = 1024
                else:
                    search_resolution = 2048

                recommender = _arena_size_recommender(arena_size, search_resolution)

                founded_arena = None
                while True:
                    arena_size, recommender_status = recommender.recommend()

                    if recommender_status == 1:
                        print("The best arena size is found to be: {}".format(arena_size))
                        founded_arena = arena_size
                        _arena_placer(model_header_path, arena_size)
                        break

                    if recommender_status == -1:
                        print("The best arena size cannot be found")
                        _arena_placer(model_header_path, original_arena_size)
                        break

                    print("Building the project with arena size of {} ...".format(arena_size), end=" ", flush=True)
                    _arena_placer(model_header_path, arena_size)
                    try:
                        text_size, data_size, bss_size = hardware.build_project()
                        print("Done")
                    except hardware.RAMExceededError:
                        print("Failed")
                        recommender.update(arena_size, -1)
                        continue
                    except Exception:
                        print("Failed")
                        print("Unknown build error!")
                        _arena_placer(model_header_path, original_arena_size)
                        break

                    print("Uploading the program ...", end=" ", flush=True)
                    try:
                        hardware.upload_program()
                        print("Done")
                    except Exception:
                        print("Failed")
                        _arena_placer(model_header_path, original_arena_size)
                        break

                    print("Reading the output ...", end=" ", flush=True)

                    try:
                        hardware.read_output(overall_timeout=cfg.benchmark_overall_timeout, silence_timeout=cfg.benchmark_silence_timeout, keyword="Benchmark end")
                        recommender.update(arena_size, 0)
                        print("Done")
                    except hardware.BoardNotFoundError:
                        print("Failed")
                        print("Board not found!")
                        _arena_placer(model_header_path, original_arena_size)
                        break
                    except TimeoutError as e:
                        if "missing" in str(e) or "Too many buffers" in str(e):
                            print("Failed")
                            recommender.update(arena_size, 1)
                            continue
                        else:
                            print("Failed")
                            print("Receiving output timeout!")
                            _arena_placer(model_header_path, original_arena_size)
                            break
                    except Exception:
                        print("Failed")
                        print("Unknown error!")
                        _arena_placer(model_header_path, original_arena_size)
                        break
                print("")

            print("Building the project ...", end=" ", flush=True)
            try:
                text_size, data_size, bss_size = hardware.build_project()
                print("Done")

            except hardware.RAMExceededError as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "build_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "RAM size exceeded"
                output[i]["error_file"] = os.path.join(save_dir, "build_error.txt")
                continue

            except hardware.FlashExceededError as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "build_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "Flash size exceeded"
                output[i]["error_file"] = os.path.join(save_dir, "build_error.txt")
                continue

            except Exception as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "build_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "Build failed"
                output[i]["error_file"] = os.path.join(save_dir, "build_error.txt")
                continue

            result = {"text_size": text_size, "data_size": data_size, "bss_size": bss_size}

            if cfg.software_platform == "TFLM":
                result.update({"arena_finder": cfg.arena_finder})
                if cfg.arena_finder:
                    result.update({"arena_resolution": search_resolution, "founded_arena": founded_arena})

            print("Uploading the program ...", end=" ", flush=True)
            try:
                hardware.upload_program()
                print("Done")
            except Exception as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "upload_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "Upload failed"
                output[i]["error_file"] = os.path.join(save_dir, "upload_error.txt")
                continue

            print("Reading the output ...", end=" ", flush=True)

            try:
                benchmark_output = hardware.read_output(overall_timeout=cfg.benchmark_overall_timeout, silence_timeout=cfg.benchmark_silence_timeout, keyword="Benchmark end")
                print("Done")

            except hardware.BoardNotFoundError as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "connection_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "Board not found"
                output[i]["error_file"] = os.path.join(save_dir, "connection_error.txt")
                continue

            except TimeoutError as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "timeout_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                if "missing" in str(e) or "Too many buffers" in str(e):
                    output[i]["error"] = "Arena size is too small"
                else:
                    output[i]["error"] = "Receiving output timeout"
                output[i]["error_file"] = os.path.join(save_dir, "timeout_error.txt")
                continue

            except Exception as e:
                print("Failed")
                shutil.rmtree(save_dir, ignore_errors=True)
                os.makedirs(save_dir, exist_ok=True)
                with open(os.path.join(save_dir, "benchmark_output_error.txt"), "w") as f:
                    f.write(str(e))
                output[i]["result"] = "failed"
                output[i]["error"] = "Benchmark output error"
                output[i]["error_file"] = os.path.join(save_dir, "benchmark_output_error.txt")
                continue

            result.update({"serial_output": benchmark_output})

            n_timing_tests, avg_ms, std_ms, avg_ticks, std_ticks = find_exe_time(benchmark_output)
            result.update({
                "n_timing_tests": n_timing_tests,
                "avg_ms": float(avg_ms) if avg_ms is not None else None,
                "std_ms": float(std_ms) if std_ms is not None else None,
                "avg_ticks": float(avg_ticks) if avg_ticks is not None else None,
                "std_ticks": float(std_ticks) if std_ticks is not None else None
            })

            n_accuracy_tests, avg_mae, std_mae = find_prediction_mae(benchmark_output)
            result.update({
                "n_accuracy_tests": n_accuracy_tests,
                "avg_mae": float(avg_mae) if avg_mae is not None else None,
                "std_mae": float(std_mae) if std_mae is not None else None
            })

            benchmarking_element = result.copy()
            benchmarking_element.update({
                "model_name": _find_model_name(target_dir),
                "model_type": _find_model_type(target_dir),
                "model_directory": target_dir
            })
            benchmarking_result.append(benchmarking_element)

            # The 'data' section holds our sample data and is essentially platform-independent
            # (TFLM and Ekkono don't affect it; EI affects it slightly, which we ignore),
            # so it is excluded from the reported Flash and RAM sizes.
            print("Benchmarking result:")
            print("Flash size: {} bytes".format(text_size))
            print("RAM size: {} bytes".format(bss_size))

            if n_timing_tests > 0:
                print("n_timing_tests: {}".format(n_timing_tests))
                print("avg_ms: {} ms".format(avg_ms))
                print("std_ms: {} ms".format(std_ms))
                print("avg_ticks: {}".format(avg_ticks))
                print("std_ticks: {}".format(std_ticks))
            else:
                print("No timing information was found")

            if n_accuracy_tests > 0:
                print("n_accuracy_tests: {}".format(n_accuracy_tests))
                print("avg_mae: {}".format(avg_mae))
                print("std_mae: {}".format(std_mae))
            else:
                print("No accuracy information was found")

            shutil.rmtree(save_dir, ignore_errors=True)
            os.makedirs(save_dir, exist_ok=True)
            with open(os.path.join(save_dir, "result.yaml"), "w") as f:
                yaml.dump(result, f, indent=4, sort_keys=False)

            output[i]["result"] = "success"

        except Exception as e:
            output[i]["result"] = "failed"
            output[i]["error"] = type(e).__name__
            output[i]["traceback"] = traceback.format_exc()
            print("Error:")
            print(traceback.format_exc())

    _save_to_excel(benchmarking_result, cfg.software_platform, cfg.save_path)

    test_name = os.path.basename(cfg.save_path)
    test_name = os.path.splitext(test_name)[0]
    figures_save_dir = os.path.join(os.path.dirname(cfg.save_path), "figures", test_name)
    result_plotter(cfg.save_path, figures_save_dir)

    return output
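The returned list can be post-processed directly, for example to report which models failed. A small sketch with made-up entries (the dictionaries follow the documented shape; the paths are illustrative):

```python
# Made-up return value of main(), following the documented shape.
results = [
    {"dir": "models/model_a", "result": "success"},
    {"dir": "models/model_b", "result": "failed",
     "error": "RAM size exceeded",
     "error_file": "models/model_b/benchmark_result/build_error.txt"},
]

# Collect the failed targets and show why each one failed.
failed = [r for r in results if r["result"] == "failed"]
for r in failed:
    print(r["dir"], "->", r["error"])
print(len(failed))  # 1
```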

edgemark.models.automate.investigator

find_exe_time

find_exe_time(text)

Find the execution time from the text.

Parameters:

- text (str): The text to search. (required)

Returns:

- tuple: A tuple of (n_tests, avg_ms, std_ms, avg_ticks, std_ticks).

Source code in edgemark/models/automate/investigator.py
def find_exe_time(text):
    """
    Find the execution time from the text.

    Args:
        text (str): The text to search.

    Returns:
        tuple: A tuple of (n_tests, avg_ms, std_ms, avg_ticks, std_ticks).
    """
    ms = []
    ticks = []

    pattern = r"Execution time: ([\d.]+) ms \((\d+) ticks\)"
    matches = re.findall(pattern, text)

    for match in matches:
        ms.append(float(match[0]))
        ticks.append(int(match[1]))

    pattern = r"Execution time: (\d+) ticks"
    matches = re.findall(pattern, text)

    for match in matches:
        ticks.append(int(match))

    ms = np.array(ms)
    ticks = np.array(ticks)

    if len(ms) > 0:
        assert len(ticks) == len(ms)
    n_tests = len(ticks)

    if len(ms) > 0:
        avg_ms = np.mean(ms)
        std_ms = np.std(ms)
    else:
        avg_ms = None
        std_ms = None

    if len(ticks) > 0:
        avg_ticks = np.mean(ticks)
        std_ticks = np.std(ticks)
    else:
        avg_ticks = None
        std_ticks = None

    return n_tests, avg_ms, std_ms, avg_ticks, std_ticks
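The first regex above expects serial lines of the form "Execution time: <ms> ms (<ticks> ticks)". A quick illustration of what it extracts (the sample text is made up):

```python
import re

# Made-up serial output in the format find_exe_time() parses.
text = (
    "Execution time: 1.23 ms (123 ticks)\n"
    "Execution time: 1.25 ms (125 ticks)\n"
)
pattern = r"Execution time: ([\d.]+) ms \((\d+) ticks\)"
matches = re.findall(pattern, text)
print(matches)  # [('1.23', '123'), ('1.25', '125')]
```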

find_prediction_mae

find_prediction_mae(text)

Find the mean absolute error (MAE) from the text.

Parameters:

- text (str): The text to search. (required)

Returns:

- tuple: A tuple of (n_tests, avg_mae, std_mae).

Source code in edgemark/models/automate/investigator.py
def find_prediction_mae(text):
    """
    Find the mean absolute error (MAE) from the text.

    Args:
        text (str): The text to search.

    Returns:
        tuple: A tuple of (n_tests, avg_mae, std_mae).
    """
    maes = []

    y_expected_min = None
    y_expected_max = None
    for line in text.split("\n"):
        pattern = r"\[(-?[\d.]+), (-?[\d.]+)\]"
        matches = re.findall(pattern, line)

        y_expected = []
        y_predicted = []
        for match in matches:
            y_expected.append(float(match[0]))
            y_predicted.append(float(match[1]))
            if y_expected_min is None or float(match[0]) < y_expected_min:
                y_expected_min = float(match[0])
            if y_expected_max is None or float(match[0]) > y_expected_max:
                y_expected_max = float(match[0])

        if len(y_expected) > 0:
            maes.append(np.abs(np.array(y_expected) - np.array(y_predicted)))

    maes = np.array(maes)
    if y_expected_min is not None and y_expected_max is not None and (y_expected_max - y_expected_min) > 0:
        maes = maes / (y_expected_max - y_expected_min)

    n_tests = len(maes)
    if len(maes) > 0:
        avg_mae = np.mean(maes)
        std_mae = np.std(maes)
    else:
        avg_mae = None
        std_mae = None

    return n_tests, avg_mae, std_mae
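Each serial line may carry one or more "[expected, predicted]" pairs, which the per-line regex pulls out before the absolute errors are computed. A small illustration (the sample line is made up):

```python
import re

# Made-up line with two [expected, predicted] pairs.
line = "[0.50, 0.48] [1.00, 0.97]"
pattern = r"\[(-?[\d.]+), (-?[\d.]+)\]"
pairs = re.findall(pattern, line)
print(pairs)  # [('0.50', '0.48'), ('1.00', '0.97')]

# Absolute error per pair, as find_prediction_mae() computes before normalization.
errors = [abs(float(e) - float(p)) for e, p in pairs]
print(errors)
```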

edgemark.models.automate.hardware_template

This module contains a template class that other hardware classes should inherit from.

HardwareTemplate

This class is a template for hardware classes. In order to create a new hardware class, you should inherit from this class and implement its abstract functions.

Source code in edgemark/models/automate/hardware_template.py
class HardwareTemplate:
    """
    This class is a template for hardware classes. In order to create a new hardware class,
    you should inherit from this class and implement its abstract functions.
    """

    class RAMExceededError(Exception):
        """
        Exception raised when the hardware's required RAM usage exceeds its size.
        """
        pass


    class FlashExceededError(Exception):
        """
        Exception raised when the hardware's required flash usage exceeds its size.
        """
        pass


    class BoardNotFoundError(Exception):
        """
        Exception raised when the board is not found.
        """
        pass


    def __init__(self, software_platform):
        """
        Initializes the hardware class.

        Args:
            software_platform (str): The software platform that will be used with this hardware.
        """
        pass


    def get_model_dir(self):
        """
        Returns the directory where the model files can go.

        Returns:
            str: The directory where the model files can go.
        """
        raise NotImplementedError


    def build_project(self, clean=False):
        """
        Builds the project.

        Args:
            clean (bool): Whether to clean the project before building.

        Returns:
            tuple: A tuple containing text_size, data_size, and bss_size.

        Raises:
            HardwareTemplate.RAMExceededError: If the hardware's required RAM usage exceeds its size.
            HardwareTemplate.FlashExceededError: If the hardware's required flash usage exceeds its size.
            Exception: If the project cannot be built for other reasons.
        """
        raise NotImplementedError


    def upload_program(self):
        """
        Uploads the program to the hardware.

        Raises:
            Exception: If the program cannot be uploaded.
        """
        raise NotImplementedError


    @staticmethod
    def read_output(overall_timeout, silence_timeout, keyword=None, verbose=False):
        """
        Reads the output from the hardware.

        Args:
            overall_timeout (int): The overall timeout for reading.
            silence_timeout (int): The silence timeout for reading.
            keyword (str): The keyword to stop reading at.
            verbose (bool): Whether to print the output as it is read.

        Returns:
            str: The output read from the hardware.

        Raises:
            HardwareTemplate.BoardNotFoundError: If the board is not found.
            TimeoutError: If reading times out.
            Exception: If the output is incomplete for other reasons.
        """
        raise NotImplementedError
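A new hardware backend inherits from HardwareTemplate and implements the abstract methods above. The sketch below is a minimal, hypothetical in-memory backend (the HardwareTemplate body is condensed here, and DummyHardware with its return values is illustrative, not a real board driver):

```python
# Condensed stand-in for the documented HardwareTemplate interface.
class HardwareTemplate:
    class RAMExceededError(Exception): pass
    class FlashExceededError(Exception): pass
    class BoardNotFoundError(Exception): pass

    def __init__(self, software_platform): pass
    def get_model_dir(self): raise NotImplementedError
    def build_project(self, clean=False): raise NotImplementedError
    def upload_program(self): raise NotImplementedError
    @staticmethod
    def read_output(overall_timeout, silence_timeout, keyword=None, verbose=False):
        raise NotImplementedError


class DummyHardware(HardwareTemplate):
    """Hypothetical backend that fakes a successful benchmark run."""

    def __init__(self, software_platform):
        self.software_platform = software_platform

    def get_model_dir(self):
        # Directory the automation copies the model/data files into.
        return "/tmp/dummy_project/model"

    def build_project(self, clean=False):
        # (text_size, data_size, bss_size) in bytes, as main() expects.
        return (1000, 200, 300)

    def upload_program(self):
        pass  # nothing to flash on a fake board

    @staticmethod
    def read_output(overall_timeout, silence_timeout, keyword=None, verbose=False):
        # Ends with the keyword main() waits for ("Benchmark end").
        return "Execution time: 1.00 ms (100 ticks)\nBenchmark end"


hw = DummyHardware("TFLM")
text_size, data_size, bss_size = hw.build_project()
print(text_size, data_size, bss_size)  # 1000 200 300
```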

BoardNotFoundError

Bases: Exception

Exception raised when the board is not found.

Source code in edgemark/models/automate/hardware_template.py
class BoardNotFoundError(Exception):
    """
    Exception raised when the board is not found.
    """
    pass

FlashExceededError

Bases: Exception

Exception raised when the hardware's required flash usage exceeds its size.

Source code in edgemark/models/automate/hardware_template.py
class FlashExceededError(Exception):
    """
    Exception raised when the hardware's required flash usage exceeds its size.
    """
    pass

RAMExceededError

Bases: Exception

Exception raised when the hardware's required RAM usage exceeds its size.

Source code in edgemark/models/automate/hardware_template.py
class RAMExceededError(Exception):
    """
    Exception raised when the hardware's required RAM usage exceeds its size.
    """
    pass

__init__

__init__(software_platform)

Initializes the hardware class.

Parameters:

- software_platform (str): The software platform that will be used with this hardware. (required)
Source code in edgemark/models/automate/hardware_template.py
def __init__(self, software_platform):
    """
    Initializes the hardware class.

    Args:
        software_platform (str): The software platform that will be used with this hardware.
    """
    pass

build_project

build_project(clean=False)

Builds the project.

Parameters:

- clean (bool): Whether to clean the project before building. Default: False

Returns:

- tuple: A tuple containing text_size, data_size, and bss_size.

Raises:

- RAMExceededError: If the hardware's required RAM usage exceeds its size.
- FlashExceededError: If the hardware's required flash usage exceeds its size.
- Exception: If the project cannot be built for other reasons.

Source code in edgemark/models/automate/hardware_template.py
def build_project(self, clean=False):
    """
    Builds the project.

    Args:
        clean (bool): Whether to clean the project before building.

    Returns:
        tuple: A tuple containing text_size, data_size, and bss_size.

    Raises:
        HardwareTemplate.RAMExceededError: If the hardware's required RAM usage exceeds its size.
        HardwareTemplate.FlashExceededError: If the hardware's required flash usage exceeds its size.
        Exception: If the project cannot be built for other reasons.
    """
    raise NotImplementedError

get_model_dir

get_model_dir()

Returns the directory where the model files can go.

Returns:

- str: The directory where the model files can go.

Source code in edgemark/models/automate/hardware_template.py
def get_model_dir(self):
    """
    Returns the directory where the model files can go.

    Returns:
        str: The directory where the model files can go.
    """
    raise NotImplementedError

read_output staticmethod

read_output(overall_timeout, silence_timeout, keyword=None, verbose=False)

Reads the output from the hardware.

Parameters:

- overall_timeout (int): The overall timeout for reading. (required)
- silence_timeout (int): The silence timeout for reading. (required)
- keyword (str): The keyword to stop reading at. Default: None
- verbose (bool): Whether to print the output as it is read. Default: False

Returns:

- str: The output read from the hardware.

Raises:

- BoardNotFoundError: If the board is not found.
- TimeoutError: If reading times out.
- Exception: If the output is incomplete for other reasons.

Source code in edgemark/models/automate/hardware_template.py
@staticmethod
def read_output(overall_timeout, silence_timeout, keyword=None, verbose=False):
    """
    Reads the output from the hardware.

    Args:
        overall_timeout (int): The overall timeout for reading.
        silence_timeout (int): The silence timeout for reading.
        keyword (str): The keyword to stop reading at.
        verbose (bool): Whether to print the output as it is read.

    Returns:
        str: The output read from the hardware.

    Raises:
        HardwareTemplate.BoardNotFoundError: If the board is not found.
        TimeoutError: If reading times out.
        Exception: If the output is incomplete for other reasons.
    """
    raise NotImplementedError

upload_program

upload_program()

Uploads the program to the hardware.

Raises:

- Exception: If the program cannot be uploaded.

Source code in edgemark/models/automate/hardware_template.py
def upload_program(self):
    """
    Uploads the program to the hardware.

    Raises:
        Exception: If the program cannot be uploaded.
    """
    raise NotImplementedError