Maix-III AXera-Pi: Try Python Programming
Date | Author | Changes |
---|---|---|
2022.12.02 | lyx | Initial version of the document |
2022.12.15 | lyx | Added content |
2023.01.04 | lyx | Added face/license-plate recognition, Yolov6 and other new models |
2023.01.29 | lyx | Added detail notes |
2023.02.23 | lyx | Added YOLOV8, YOLOV8-SEG, ax_bvc_det (person/vehicle/non-motor detection), crowdcount (crowd counting) and other models |
Following on from the earlier Getting Started Guide and System Usage Guide, this article introduces how to try Python programming on the MAIX-III AXera-Pi board.
Preface
From the MAIX-I MCU and MAIX-II SoC lines to the MAIX-III Linux series, SIPEED has always worked to make its boards quick and easy to pick up, which is why so many getting-started guides have been written and why the boards keep gaining more ways to be programmed. Not yet comfortable with Linux basics? Finding C++ difficult? Struggling to call AI models? No problem: this beginner-friendly Python programming guide will help you get the AXera-Pi board up and running quickly and put its AI model applications to use.
Before we start programming in Python, let's first get to know the language and tools we will be using; read on.
The AXera-Pi board ships with Python3, Jupyter Notebook, Pinpong, Pillow and other packages so that users can program in Python more conveniently. This document uses Python programming in the Jupyter Notebook web interface as its running example.
Python is a widely used interpreted, high-level, general-purpose programming language. It supports multiple programming paradigms, including functional, imperative, reflective, structured and object-oriented programming. It also has a dynamic type system and garbage collection that manages memory automatically, and it comes with a large and comprehensive standard library.
How does Python differ from C++?
As noted above, Python is an interpreted language: there is no compilation step, and code in a file with the `.py` extension is handed straight to the interpreter, which produces the output. C++ is a compiled language: the compiler first turns the source code into object code, which is then executed to produce the output. For beginners, Python is easier to learn, with simple syntax and better readability, while C++ is stronger for systems programming and performance but has a more complex syntax that can be challenging for newcomers.
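To make the contrast concrete, the one-line script below is all it takes to produce output in Python; there is no separate compile step (this is the same hello.py that is run later with %run):

# hello.py - a minimal script; the interpreter runs it directly, e.g. `python3 hello.py`
print("hello world!")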
Python basics and getting-started resources
Before programming in Python with Jupyter Notebook, you should have some basic knowledge of the Python language; the links below are good places to learn.
The following articles suit readers who already know some Python and want to go deeper:
Once that check is done, enter the `jupyter notebook` command to start it; the terminal will return a series of server messages.
As shown in the figure above, the terminal prints the server information. Open any browser and enter the board's IP address followed by `:8888` to reach the web interface directly (note: the loopback address `lo: 127.0.0.1` cannot be used for access). The page will prompt for a password; enter `root` to access it.
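If you are not sure what the board's IP address is, one generic way to check it from a Python shell on the board (a sketch that assumes the board has a network route; no packets are actually sent) is:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))     # picks the outgoing interface without sending data
print(s.getsockname()[0])      # the address to combine with :8888 in the browser
s.close()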
Note: keep the terminal session connected while using Jupyter Notebook; otherwise the connection to the local server is lost and the page can no longer be used.
After logging in you land on the Files page; click New on the right to choose the editing environment that fits your needs.
Python3: the default Python3 kernel
Text File: create a new text file
Folder: create a new folder
Terminal: open a user terminal in the browser, similar to a shell/adb terminal.
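After opening a new Python3 notebook, a first cell like the one below (a simple sanity check, not part of the original steps) confirms which interpreter and platform the code is running on:

import sys
import platform

print(sys.version)          # the Python 3 interpreter shipped on the image
print(platform.machine())   # the board's CPU architecture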
Running code
All example code in this article assumes the GC4653 camera; if you have the OS04A10 model, see the Maix-III AXera-Pi FAQ for how to switch.
Choose the Python3 environment to open an editing page. There are three example ways of running Python code on the web page, shown below for reference; after the code runs, the output parameters are printed below the cell, and the live result can be observed on the board's screen.
- Use `! + cmd` to run a built-in script or shell command, or write Python code directly in the cell and click Run; here we take running an NPU application as an example.
!ls home/images
air.jpg carvana02.jpg face5.jpg o2_resize.jpg ssd_car.jpg
aoa-2.jpeg carvana03.jpg grace_hopper.jpg pineapple.jpg ssd_dog.jpg
aoa.jpeg carvana04.jpg mobileface01.jpg pose-1.jpeg ssd_horse.jpg
bike.jpg cat.jpg mobileface02.jpg pose-2.jpeg
bike2.jpg cityscape.png mtcnn_face4.jpg pose-3.jpeg
cable.jpg dog.jpg mtcnn_face6.jpg pose.jpg
carvana01.jpg efficientdet.png mv2seg.png selfie.jpg
!/home/ax-samples/build/install/bin/ax_yolov5s -m /home/models/yolov5s.joint -i /home/images/cat.jpg -r 10
--------------------------------------
model file : /home/models/yolov5s.joint
image file : /home/images/cat.jpg
img_h, img_w : 640 640
[AX_SYS_LOG] AX_SYS_Log2ConsoleThread_Start
Run-Joint Runtime version: 0.5.10
--------------------------------------
[INFO]: Virtual npu mode is 1_1
Tools version: d696ee2f
run over: output len 3
--------------------------------------
Create handle took 487.99 ms (neu 22.29 ms, axe 0.00 ms, overhead 465.70 ms)
--------------------------------------
Repeat 10 times, avg time 22.57 ms, max_time 22.88 ms, min_time 22.46 ms
--------------------------------------
detection num: 1
15: 89%, [ 167, 28, 356, 353], cat
[AX_SYS_LOG] Waiting thread(2867848448) to exit
[AX_SYS_LOG] AX_Log2ConsoleRoutine terminated!!!
exit[AX_SYS_LOG] join thread(2867848448) ret:0
from IPython.display import Image
Image("yolov5s_out.jpg")
- You can also use `%run` to run a module or `.py` file; here we run `hello.py` as an example.
%run hello.py
hello world!
Files can also be imported directly from the web page: click Upload on the right to import the file you need into any directory.
- Exporting a file written in the web interface, using the following as an example:
Anything written in the web interface can be exported as a document. By default it is saved as a `.ipynb` file in JSON format; to save in a different format, click File -> Download as -> and choose the format you need, and the browser will download it to your computer.
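If the nbconvert package is available on the image, a notebook can also be exported from a cell or a terminal with the standard `jupyter nbconvert` tool (the notebook name below is a placeholder):

!jupyter nbconvert --to script my_notebook.ipynb   # writes my_notebook.py
!jupyter nbconvert --to html my_notebook.ipynb     # writes my_notebook.html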
ax-pipeline-api
On system images updated after 20230214, ax-pipeline-api has been upgraded to version 1.0.9, which lets Python grab camera images and display them on the screen together with the inference results, or save them.
ax-pipeline-api: this project builds pybind11 and ctypes Python APIs on top of ax-pipeline, so users can call the many built-in AI models from Python and combine them with common Python libraries such as pinpong, opencv, numpy and pillow, making AXera-Pi even easier to use!
What is the difference between ctypes and pybind11?
The ctypes bindings were adapted first; compared with the newer pybind11 bindings they expose more interfaces and are more stable. The newer pybind11 bindings produce more intuitive output (results can be shown on the screen or in the web page) and the code is easier to understand, but because the board's peripherals are limited the two cannot be mixed.
- Install the ax-pipeline-api package before use
Because ax-pipeline-api is updated quite frequently, run the command below in a terminal before you start programming in Python to make sure you are using the latest version.
!pip3 install ax-pipeline-api -U
Requirement already satisfied: ax-pipeline-api in /usr/local/lib/python3.9/dist-packages (1.0.7)
Collecting ax-pipeline-api
Using cached ax-pipeline-api-1.0.7.tar.gz (15.5 MB)
Using cached ax-pipeline-api-1.0.6.tar.gz (19.5 MB)
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
b'bottle' 0.5032062530517578 0.11116950958967209 0.40303218364715576 0.21706661581993103 0.5804097652435303
b'bottle' 0.7368168830871582 0.11036165803670883 0.4033470153808594 0.2250344455242157 0.5822746157646179
b'bottle' 0.7751374244689941 0.11194521933794022 0.3943900167942047 0.21920150518417358 0.5996366739273071
b'bottle' 0.7996727228164673 0.1136007010936737 0.4050813317298889 0.21979697048664093 0.579041600227356
b'bottle' 0.7779128551483154 0.11486124992370605 0.4005553424358368 0.21681155264377594 0.5885791778564453
b'bottle' 0.7506007552146912 0.11348568648099899 0.4008048176765442 0.2208258956670761 0.5853195190429688
b'bottle' 0.7448824644088745 0.11283267289400101 0.40612301230430603 0.22086872160434723 0.5755194425582886
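Note that the x, y, w, h values printed above are normalized to the range 0 to 1; to turn them into pixel coordinates, scale them by the display or frame size, as the drawing examples later in this article do. A small helper might look like this (a sketch using the 854x480 panel size used elsewhere in this document):

def bbox_to_pixels(bbox, width=854, height=480):
    """Convert a normalized pipeline bbox dict into pixel coordinates (x, y, w, h)."""
    return (int(bbox['x'] * width), int(bbox['y'] * height),
            int(bbox['w'] * width), int(bbox['h'] * height))

# e.g. inside the result loop above:
# print(bbox_to_pixels(i['bbox']))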
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov8.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
b'toilet' 0.4541160762310028 0.602770209312439 0.9111631512641907 0.16810722649097443 0.08513855934143066
b'toilet' 0.6902503967285156 0.606963574886322 0.9117961525917053 0.16024480760097504 0.08727789670228958
b'toilet' 0.6852353811264038 0.6020327210426331 0.9118891358375549 0.16942621767520905 0.08718493580818176
b'toilet' 0.7014157176017761 0.6041151881217957 0.9120386242866516 0.16582755744457245 0.0863698348402977
b'cup' 0.46080872416496277 0.6049922108650208 0.9143685698509216 0.1643451750278473 0.08425315469503403
As the yolov8 example above shows, the detection result parameters are printed below the cell while it runs, and the live image can be viewed on the board's screen. You can also swap in a `.so` library with different functionality, or an AI model with a different effect, in the code above to build more AI applications.
To change the `.so` library or the AI model, follow the examples below. This document only picks a few classic models as examples; you can apply the same pattern to swap in other function libraries and models, and more details can be found in ax-pipeline-api. All examples in this document use the GC4653 camera; for other sensor models see the Maix-III AXera-Pi FAQ.
- Overview of the built-in libxxx*.so libraries:
Swap in a different `libxxx*.so` to try out a different function.
libsample_h264_ivps_joint_vo_sipy.so # input h264 video to ivps joint output screen vo
libsample_v4l2_user_ivps_joint_vo_sipy.so # input v4l2 /dev/videoX to ivps joint output screen vo
libsample_vin_ivps_joint_vo_sipy.so # input mipi sensor to ivps joint output screen vo
libsample_vin_ivps_joint_venc_rtsp_sipy.so # input mipi sensor to ivps joint output rtsp
libsample_vin_ivps_joint_venc_rtsp_vo_sipy.so # input mipi sensor to ivps joint output rtsp and screen vo
libsample_vin_ivps_joint_vo_h265_sipy.so # input mipi sensor to ivps joint output screen vo and save h265 video file
libsample_multi_rtsp_ivps_joint_multi_rtsp_sipy.so # input multi rtsp video to ivps joint output multi rtsp video
libsample_rtsp_ivps_joint_rtsp_sipy.so # input video from rtsp to ivps joint output rtsp
libsample_rtsp_ivps_joint_rtsp_vo_sipy.so # input video from rtsp to ivps joint output rtsp and screen vo
libsample_rtsp_ivps_joint_vo_sipy.so # input video from rtsp to ivps joint output screen vo
To change `libxxx*.so`, refer to the following example:
pipeline.load([
    'libsample_vin_ivps_joint_venc_rtsp_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    '-c', '2',
])
- Overview of the built-in AI models:
The AI models live in the /home/config directory; switch models to build different AI applications.
ax_bvc_det.json hrnet_pose_yolov8.json yolov5s_face_recognition.json
ax_person_det.json license_plate_recognition.json yolov5s_license_plate.json
ax_pose.json nanodet.json yolov6.json
ax_pose_yolov5s.json palm_hand_detection.json yolov7.json
ax_pose_yolov8.json pp_human_seg.json yolov7_face.json
crowdcount.json scrfd.json yolov7_palm_hand.json
hand_pose.json yolo_fastbody.json yolov8.json
hand_pose_yolov7_palm.json yolopv2.json yolov8_seg.json
hrnet_animal_pose.json yolov5_seg.json yolox.json
hrnet_pose.json yolov5s.json
hrnet_pose_ax_det.json yolov5s_face.json
To change the AI model, refer to the following example:
pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s_face.json',
    '-c', '2',
])
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov8_seg.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
{'label': 39, 'prob': 0.41857901215553284, 'objname': b'bottle', 'bbox': {'x': 0.02848125249147415, 'y': 0.03796946257352829, 'w': 0.03146517649292946, 'h': 0.15615946054458618}, 'bHasMask': 1, 'mYolov5Mask': {'w': 6, 'h': 15, 'data': b'\x00\x00\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00'}}
{'label': 39, 'prob': 0.4118087887763977, 'objname': b'bottle', 'bbox': {'x': 0.028065890073776245, 'y': 0.03647643327713013, 'w': 0.0326821468770504, 'h': 0.15858806669712067}, 'bHasMask': 1, 'mYolov5Mask': {'w': 6, 'h': 15, 'data': b'\x00\x00\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\x00\x00'}}
import time
from ax import pipeline

pipeline.load([
    'libsample_v4l2_user_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov8.json'
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/license_plate_recognition.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
{'label': 0, 'prob': 0.350676953792572, 'objname': b'anhuiCLJ3', 'bbox': {'x': 0.2686941921710968, 'y': 0.17339776456356049, 'w': 0.4595341384410858, 'h': 0.3150448203086853}, 'bHasBoxVertices': 1, 'bbox_vertices': [{'x': 0.2912384867668152, 'y': 0.17690573632717133}, {'x': 0.7315817475318909, 'y': 0.22101126611232758}, {'x': 0.708527147769928, 'y': 0.47905251383781433}, {'x': 0.2720993161201477, 'y': 0.44149187207221985}], 'nLandmark': 4, 'landmark': [{'x': 0.2720993161201477, 'y': 0.44149187207221985}, {'x': 0.708527147769928, 'y': 0.47905251383781433}, {'x': 0.7315817475318909, 'y': 0.22101126611232758}, {'x': 0.2912384867668152, 'y': 0.17690573632717133}]}
{'label': 0, 'prob': 0.2783961892127991, 'objname': b'anhuiTLJ0', 'bbox': {'x': 0.23988564312458038, 'y': 0.38015124201774597, 'w': 0.46291205286979675, 'h': 0.31740766763687134}, 'bHasBoxVertices': 1, 'bbox_vertices': [{'x': 0.26884639263153076, 'y': 0.38855984807014465}, {'x': 0.7055302858352661, 'y': 0.4226400554180145}, {'x': 0.6729471683502197, 'y': 0.6891617774963379}, {'x': 0.24066202342510223, 'y': 0.658175528049469}], 'nLandmark': 4, 'landmark': [{'x': 0.24066202342510223, 'y': 0.658175528049469}, {'x': 0.6729471683502197, 'y': 0.6891617774963379}, {'x': 0.7055302858352661, 'y': 0.4226400554180145}, {'x': 0.26884639263153076, 'y': 0.38855984807014465}]}
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s_face_recognition.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
{'label': 0, 'prob': 0.5212324261665344, 'objname': b'unknown', 'bbox': {'x': 0.6881264448165894, 'y': 0.5129707455635071, 'w': 0.05985317379236221, 'h': 0.12323958426713943}, 'bHasBoxVertices': 1, 'bbox_vertices': [{'x': 0.0, 'y': 0.0}, {'x': 0.0, 'y': 0.0}, {'x': 0.0, 'y': 0.0}, {'x': 0.0, 'y': 0.0}], 'nLandmark': 5, 'landmark': [{'x': 0.7027329802513123, 'y': 0.5531390309333801}, {'x': 0.7250422835350037, 'y': 0.5570502281188965}, {'x': 0.7070087194442749, 'y': 0.57811439037323}, {'x': 0.7003915905952454, 'y': 0.6006622314453125}, {'x': 0.7167162299156189, 'y': 0.6037786602973938}]}
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/ax_bvc_det.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
b'vehicle' 0.9299032092094421 0.3565574288368225 0.44399410486221313 0.23071418702602386 0.2580929398536682
b'vehicle' 0.9225113391876221 0.357175350189209 0.44230249524116516 0.23054184019565582 0.2606807053089142
b'vehicle' 0.9186123609542847 0.3581112325191498 0.44336238503456116 0.22992925345897675 0.26163965463638306
b'vehicle' 0.5208129286766052 0.3618425130844116 0.4461480975151062 0.23065532743930817 0.2652992308139801
b'vehicle' 0.7194858193397522 0.3608142137527466 0.45302334427833557 0.23270295560359955 0.2703518867492676
b'vehicle' 0.8540934324264526 0.3617907166481018 0.4548843204975128 0.23152287304401398 0.27814221382141113
b'vehicle' 0.7356315851211548 0.31006965041160583 0.45962005853652954 0.23436705768108368 0.26961490511894226
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    # the config path after '-p' was left empty in the original; fill it in with the
    # model JSON you want to run, e.g. '/home/config/crowdcount.json' (an assumption).
    '-p', '',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/ax_pose.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
{'label': 0, 'prob': 0.41659796237945557, 'objname': b'person', 'bbox': {'x': 0.01200273260474205, 'y': 0.0, 'w': 0.9315435290336609, 'h': 0.9421796798706055}, 'bHasBoxVertices': 0, 'bHasLandmark': 17, 'landmark': [{'x': 0.6708333492279053, 'y': 0.23333333432674408}, {'x': 0.6427083611488342, 'y': 0.16851851344108582}, {'x': 0.6520833373069763, 'y': 0.14629629254341125}, {'x': 0.7322916388511658, 'y': 0.5055555701255798}, {'x': 0.7614583373069763, 'y': 0.06481481343507767}, {'x': 0.7541666626930237, 'y': 0.09444444626569748}, {'x': 0.7541666626930237, 'y': 0.1518518477678299}, {'x': 0.7124999761581421, 'y': 0.15925925970077515}, {'x': 0.5041666626930237, 'y': 0.08703703433275223}, {'x': 0.6739583611488342, 'y': 0.07407407462596893}, {'x': 0.690625011920929, 'y': 0.6814814805984497}, {'x': 0.7833333611488342, 'y': 0.25}, {'x': 0.7614583373069763, 'y': 0.25}, {'x': 0.35104167461395264, 'y': 0.6074073910713196}, {'x': 0.3489583432674408, 'y': 0.5777778029441833}, {'x': 0.0572916679084301, 'y': 0.5185185074806213}, {'x': 0.0677083358168602, 'y': 0.5185185074806213}]}
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/hand_pose.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
{'label': 0, 'prob': 0.948456346988678, 'objname': b'hand', 'bbox': {'x': 0.26589435338974, 'y': 0.26926565170288086, 'w': 0.46994149684906006, 'h': 0.4706382751464844}, 'bHasBoxVertices': 1, 'bbox_vertices': [{'x': 1.4048067331314087, 'y': -0.42393070459365845}, {'x': 1.2827746868133545, 'y': 1.74061918258667}, {'x': 0.06521528959274292, 'y': 1.5236728191375732}, {'x': 0.18724757432937622, 'y': -0.6408770084381104}], 'bHasLandmark': 21, 'landmark': [{'x': 0.3895833194255829, 'y': 0.6722221970558167}, {'x': 0.4635416567325592, 'y': 0.5925925970077515}, {'x': 0.5979166626930237, 'y': 0.4888888895511627}, {'x': 0.6979166865348816, 'y': 0.4148148000240326}, {'x': 0.7562500238418579, 'y': 0.442592591047287}, {'x': 0.7541666626930237, 'y': 0.5388888716697693}, {'x': 0.8166666626930237, 'y': 0.4314814805984497}, {'x': 0.7927083373069763, 'y': 0.3314814865589142}, {'x': 0.768750011920929, 'y': 0.25925925374031067}, {'x': 0.746874988079071, 'y': 0.5981481671333313}, {'x': 0.778124988079071, 'y': 0.43703705072402954}, {'x': 0.7260416746139526, 'y': 0.3203703761100769}, {'x': 0.706250011920929, 'y': 0.27222222089767456}, {'x': 0.703125, 'y': 0.6499999761581421}, {'x': 0.7291666865348816, 'y': 0.4611110985279083}, {'x': 0.6666666865348816, 'y': 0.3722222149372101}, {'x': 0.628125011920929, 'y': 0.3351851999759674}, {'x': 0.6416666507720947, 'y': 0.6981481313705444}, {'x': 0.6864583492279053, 'y': 0.5814814567565918}, {'x': 0.6625000238418579, 'y': 0.5092592835426331}, {'x': 0.6447916626930237, 'y': 0.4592592716217041}]}
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/hrnet_animal_pose.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.drop()
b'cat' 0.4067786633968353 0.5895833373069763 0.0833333358168602 0.25 0.5703703761100769
b'cat' 0.47085827589035034 0.5666666626930237 0.09444444626569748 0.2708333432674408 0.5740740895271301
b'cat' 0.5028171539306641 0.5239583253860474 0.09814814478158951 0.32083332538604736 0.5888888835906982
b'cat' 0.4983457326889038 0.5302083492279053 0.1111111119389534 0.31041666865348816 0.5777778029441833
b'dog' 0.807929515838623 0.0833333358168602 0.1259259283542633 0.26875001192092896 0.5370370149612427
b'dog' 0.6479188799858093 0.15729166567325592 0.8425925970077515 0.12291666865348816 0.1518518477678299
import time
from ax import pipeline
from PIL import Image, ImageDraw

# ready sipeed logo canvas
lcd_width, lcd_height = 854, 480
img = Image.new('RGBA', (lcd_width, lcd_height), (255,0,0,200))
ui = ImageDraw.ImageDraw(img)
ui.rectangle((20,20,lcd_width-20,lcd_height-20), fill=(0,0,0,0), outline=(0,0,255,100), width=20)
logo = Image.open("/home/res/logo.png")
img.paste(logo, box=(lcd_width-logo.size[0], lcd_height-logo.size[1]), mask=None)

def rgba2argb(rgba):
    r, g, b, a = rgba.split()
    return Image.merge("RGBA", (a, b, g, r))

canvas_argb = rgba2argb(img)

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    # '-p', '/home/config/yolov8.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    argb = canvas_argb.copy()
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        ui = ImageDraw.ImageDraw(argb)
        for i in tmp['mObjects']:
            x = i['bbox']['x'] * lcd_width
            y = i['bbox']['y'] * lcd_height
            w = i['bbox']['w'] * lcd_width
            h = i['bbox']['h'] * lcd_height
            objlabel = i['label']
            objprob = i['prob']
            ui.rectangle((x,y,x+w,y+h), fill=(100,0,0,255), outline=(255,0,0,255))
            ui.text((x,y), str(objlabel))
            ui.text((x,y+20), str(objprob))
    pipeline.config("display", (lcd_width, lcd_height, "ARGB", argb.tobytes()))
print_data 2 False
import m3axpi

# m3axpi.camera(SysCase=0) # switch os04a10
# m3axpi.camera(SysCase=2) # default gc4653

m3axpi.load("/home/config/yolov8.json")

from PIL import Image, ImageDraw, ImageFont

lcd_width, lcd_height, lcd_channel = 854, 480, 4

fnt = ImageFont.truetype("/home/res/sans.ttf", 20)
img = Image.new('RGBA', (lcd_width, lcd_height), (255,0,0,200))
ui = ImageDraw.ImageDraw(img)
ui.rectangle((20, 20, lcd_width-20, lcd_height-20), fill=(0,0,0,0), outline=(0,0,255,100), width=20)
logo = Image.open("/home/res/logo.png")
img.paste(logo, box=(lcd_width-logo.size[0], lcd_height-logo.size[1]), mask=None)

while True:
    rgba = img.copy()
    tmp = m3axpi.capture()
    rgb = Image.frombuffer("RGB", (tmp[1], tmp[0]), tmp[3])
    rgba.paste(rgb, box=(0, 0), mask=None)  ## camera 320x180 paste 854x480
    res = m3axpi.forward()
    if 'nObjSize' in res:
        ui = ImageDraw.ImageDraw(rgba)
        ui.text((0, 0), "fps:%02d" % (res['niFps']), font=fnt)
        for obj in res['mObjects']:
            x, y, w, h = int(obj['bbox'][0]*lcd_width), int(obj['bbox'][1]*lcd_height), int(obj['bbox'][2]*lcd_width), int(obj['bbox'][3]*lcd_height)
            ui.rectangle((x,y,x+w,y+h), fill=(255,0,0,100), outline=(255,0,0,255))
            ui.text((x, y), "%s:%02d" % (obj['objname'], obj['prob']*100), font=fnt)
            rgba.paste(logo, box=(x+w-logo.size[1], y+h-logo.size[1]), mask=None)
    m3axpi.display([lcd_height, lcd_width, lcd_channel, rgba.tobytes()])
    # display(rgba)  # show in the web page
!ls home/images
air.jpg carvana02.jpg face5.jpg mv2seg.png selfie.jpg
aoa-2.jpeg carvana03.jpg faces o2_resize.jpg ssd_car.jpg
aoa.jpeg carvana04.jpg grace_hopper.jpg pineapple.jpg ssd_dog.jpg
bike.jpg cat.jpg mobileface01.jpg pose-1.jpeg ssd_horse.jpg
bike2.jpg cityscape.png mobileface02.jpg pose-2.jpeg
cable.jpg dog.jpg mtcnn_face4.jpg pose-3.jpeg
carvana01.jpg efficientdet.png mtcnn_face6.jpg pose.jpg
from PIL import Image, ImageDraw
pil_im = Image.open('home/images/bike2.jpg', 'r')
draw = ImageDraw.Draw(pil_im)
draw.arc((0, 0,400,400) , start=0, end=300, fill='red',width=3)
draw.rectangle((20, 20, 200, 100), fill=(100, 20, 60), outline="#FF0000", width=3)
pil_im.show() # display(pil_im)
- Click through for more Pillow usage material.
import numpy as np
# the four data types int8, int16, int32, int64 can be written as the strings 'i1', 'i2', 'i4', 'i8'
dt = np.dtype('i4')
print(dt)
int32
- Click through for more Numpy reference material and examples.
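As one example of combining Numpy with the pipeline output, the mYolov5Mask field returned by the yolov8_seg example above (width, height and raw bytes) can be reshaped into a 2-D array; the data below is a placeholder standing in for a real pipeline.result() object:

import numpy as np

w, h = 6, 15                                 # mask size taken from the sample output above
data = bytes([0, 0, 255, 255, 255, 0]) * h   # placeholder mask bytes of length w * h
mask = np.frombuffer(data, dtype=np.uint8).reshape(h, w)
print(mask.shape)   # (15, 6)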
Controlling a Microbit with the PinPong library
The PinPong library is a set of Python libraries for controlling open-source hardware controller boards; it is based on the Firmata protocol and is compatible with MicroPython syntax.
Before use, prepare the materials listed below and wire everything up step by step: connect the Microbit with a Micro USB cable and plug the other end into the adapter's USB port, plug the adapter's Type-C port into the device's OTG port, then use a Type-C cable to connect the device's UART port to the PC to power it.
- A Type-C to USB adapter
- A Microbit and a Micro USB cable
- An AXera-Pi device and a Type-C cable
Run the code below directly in the Python3 environment to connect to the Microbit board and see the hello world LED effect.
import time
from pinpong.board import Board, Pin
from pinpong.extension.microbit import *

Board("microbit", "/dev/ttyACM0").begin()

display.show(Image.HEART)

while True:
    display.scroll("hello world")
__________________________________________
| ____ _ ____ |
| / __ \(_)___ / __ \____ ____ ____ _ |
| / /_/ / / __ \/ /_/ / __ \/ __ \/ __ `/ |
| / ____/ / / / / ____/ /_/ / / / / /_/ / |
|/_/ /_/_/ /_/_/ \____/_/ /_/\__, / |
| v0.4.9 Designed by DFRobot /____/ |
|__________________________________________|
[01] Python3.9.2 Linux-4.19.125-armv7l-with-glibc2.31 Board: MICROBIT
selected -> board: MICROBIT serial: /dev/ttyACM0
[10] Opening /dev/ttyACM0
[32] Firmata ID: 2.7
[22] Arduino compatible device found and connected to /dev/ttyACM0
[40] Retrieving analog map...
[42] Auto-discovery complete. Found 26 Digital Pins and 6 Analog Pins
------------------------------
All right. PinPong go...
------------------------------
- Click the Microbit link for more related example material.
import time
from pinpong.board import Board, Pin

Board("uno", "/dev/ttyUSB0").begin()

led = Pin(Pin.D13, Pin.OUT)  # initialize the pin as a digital output

while True:
    led.value(1)   # drive the pin high
    print("1")     # print to the terminal
    time.sleep(1)  # hold the state for one second
    led.value(0)   # drive the pin low
    print("0")     # print to the terminal
    time.sleep(1)  # hold the state for one second
__________________________________________
| ____ _ ____ |
| / __ \(_)___ / __ \____ ____ ____ _ |
| / /_/ / / __ \/ /_/ / __ \/ __ \/ __ `/ |
| / ____/ / / / / ____/ /_/ / / / / /_/ / |
|/_/ /_/_/ /_/_/ \____/_/ /_/\__, / |
| v0.4.9 Designed by DFRobot /____/ |
|__________________________________________|
[01] Python3.9.2 Linux-4.19.125-armv7l-with-glibc2.31 Board: UNO
selected -> board: UNO serial: /dev/ttyUSB0
[10] Opening /dev/ttyUSB0
[32] Firmata ID: 2.7
[22] Arduino compatible device found and connected to /dev/ttyUSB0
[40] Retrieving analog map...
[42] Auto-discovery complete. Found 20 Digital Pins and 6 Analog Pins
------------------------------
All right. PinPong go...
------------------------------
1
0
1
0
1
user quit process
An exception has occurred, use %tb to see the full traceback. SystemExit: 0
- Click the Arduino UNO link for more related example material.