Basic idea: I originally wanted to build a YOLOv5 + Android vehicle detection and tracking app, but found it could not reach real-time detection speed, so I decided to explore TensorFlow's tflite instead.
(I then came across someone's YOLOv5 + Android project, so I'll retrace it here first: https://github.com/zldrobit/yolov5/tree/tf-android)
1. Clone the code (retracing the author's steps)
ubuntu@ubuntu:~$git clone https://github.com/zldrobit/yolov5.git
ubuntu@ubuntu:~$ cd yolov5
ubuntu@ubuntu:~/yolov5$ git checkout tf-android
ubuntu@ubuntu:~/yolov5$ bash weights/download_weights.sh
ubuntu@ubuntu:~/yolov5$ python models/tf.py --weights weights/yolov5s.pt --cfg models/yolov5s.yaml --img 320
2. Inspect the converted and generated files
ubuntu@ubuntu:~/yolov5/weights$ tree
.
├── download_weights.sh
├── yolov5s-fp16.tflite
├── yolov5s.pb
├── yolov5s.pt
└── yolov5s_saved_model
    ├── assets
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index
3 directories, 7 files
3. Continue following the author's steps
ubuntu@ubuntu:~/yolov5$ python3 detect.py --weights weights/yolov5s.pb --img 320
4. Test results
Testing detection with the tflite model: recognition quality on the second test image is noticeably worse, and it is also slow:
ubuntu@ubuntu:~/yolov5$ python3 detect.py --weights weights/yolov5s-fp16.tflite --img 320
Small-object detection on the second image is still poor, and a single run takes about 5 s, which is far too slow. Although the author provides the YOLOv5 + Android code, I'd rather train and deploy my own TensorFlow model.
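For reference, the tflite inference time can be measured directly with the TFLite Python interpreter; a minimal sketch (the model and image paths are assumptions, as is the fp16 model taking a normalized 320x320 float input):
import time
import cv2
import numpy as np
import tensorflow as tf

# Load the converted model and time a single inference (path assumed)
interpreter = tf.lite.Interpreter(model_path="weights/yolov5s-fp16.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# Build a 1x320x320x3 float input from a test image, scaled to [0, 1] (image path assumed)
img = cv2.cvtColor(cv2.imread("data/images/test.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (320, 320)).astype(np.float32) / 255.0
interpreter.set_tensor(input_details[0]['index'], np.expand_dims(img, axis=0))

start = time.time()
interpreter.invoke()
print("inference time: %.3f s" % (time.time() - start))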
(Keeping this post as a record of TensorFlow training; NanoDet apparently can also be deployed on Android, something to try later.)
First, a crawler to collect vehicle images. The code below is adapted, with modifications, from the blogger dearvee's post 利用Python爬取网页图片:
import requests
import json
import urllib.request

def getSogouImag(category, length, path):
    # query Sogou's image channel API for `length` images in `category`
    n = length
    cate = category
    imgs = requests.get('http://pic.sogou.com/pics/channel/getAllRecomPicByTag.jsp?category=' + cate + '&tag=%E5%85%A8%E9%83%A8&start=0&len=' + str(n) + '&width=1920&height=1080')
    jd = json.loads(imgs.text)
    jd = jd['all_items']
    imgs_url = []
    for j in jd:
        imgs_url.append(j['pic_url'])
    m = 0
    for img_url in imgs_url:
        print('***** ' + str(m) + '.jpg *****' + ' Downloading...')
        urllib.request.urlretrieve(img_url, path + str(m) + '.jpg')
        m = m + 1
    print('Download complete!')

getSogouImag('汽车', 6000, 'F:\\test\\')
I crawled 6000 vehicle images, then used my YOLOv5 + NCNN code for automated annotation; see my post 39、使用C++调用腾讯开源框架NCNN调用YOLOFast,并实现视频流的自动化的labelme标注json数据_sxj731533730 (note that during automated annotation you should keep only targets that contain vehicles).
After annotation, a little manual adjustment is needed, and then training can begin.
Link: https://pan.baidu.com/s/1aRkVpdWqN8HzHw306ivySw
Extraction code: 80gc
{bike, bus, sedan, truck, fire engine, jeep, mini bus, motorcycle, racing, suv, taxi, heavy truck}
Since the labelme json annotations have to be converted to xml format first, see my post 2、Python Labelme标注的json与LabelImg标注的xml文件相互转换、以及VOC数据格式转化、coco数据集转化(仅适用矩形框)_sxj731533730; the core of the conversion is sketched below.
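The conversion boils down to mapping labelme's rectangle points to a VOC bndbox; a minimal sketch of the idea (assuming labelme-style json with imagePath/imageWidth/imageHeight fields and rectangle shapes; the referenced post has the full version):
import json
import os
from xml.etree.ElementTree import Element, SubElement, ElementTree

def labelme_json_to_voc_xml(json_path, xml_path):
    # load the labelme annotation (rectangle shapes assumed)
    with open(json_path) as f:
        data = json.load(f)
    root = Element('annotation')
    SubElement(root, 'filename').text = data.get('imagePath', os.path.basename(json_path))
    size = SubElement(root, 'size')
    SubElement(size, 'width').text = str(data.get('imageWidth', 0))
    SubElement(size, 'height').text = str(data.get('imageHeight', 0))
    SubElement(size, 'depth').text = '3'
    for shape in data['shapes']:
        # a labelme rectangle stores two corner points
        (x1, y1), (x2, y2) = shape['points'][0], shape['points'][1]
        obj = SubElement(root, 'object')
        SubElement(obj, 'name').text = shape['label']
        SubElement(obj, 'difficult').text = '0'
        bndbox = SubElement(obj, 'bndbox')
        SubElement(bndbox, 'xmin').text = str(int(min(x1, x2)))
        SubElement(bndbox, 'ymin').text = str(int(min(y1, y2)))
        SubElement(bndbox, 'xmax').text = str(int(max(x1, x2)))
        SubElement(bndbox, 'ymax').text = str(int(max(y1, y2)))
    ElementTree(root).write(xml_path)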
Then split the dataset 80% / 20% (adjust the paths for your setup):
import os
import random
import time
import shutil

xmlfilepath = r'/home/ubuntu/xml'
saveBasePath = r"/home/ubuntu"
trainval_percent = 0.8
train_percent = 0.8
total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
list = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list, tv)
train = random.sample(trainval, tr)
print("train and val size", tv)
print("train size", tr)
start = time.time()
test_num = 0
val_num = 0
train_num = 0
for i in list:
    if total_xml[i].endswith('xml'):
        xmlname = total_xml[i]
        jpgname, _ = os.path.splitext(xmlname)
        jpgname = jpgname + ".jpg"
        if i in trainval:  # train and val set
            if i in train:
                directory = "train"
                train_num += 1
                xml_path = os.path.join(os.getcwd(), '/home/ubuntu/{}'.format(directory))
                if not os.path.exists(xml_path):
                    os.mkdir(xml_path)
                filePath = os.path.join(xmlfilepath, xmlname)
                newfile = os.path.join(saveBasePath, os.path.join(directory, xmlname))
                shutil.copyfile(filePath, newfile)
                filePath = os.path.join(xmlfilepath, jpgname)
                newfile = os.path.join(saveBasePath, os.path.join(directory, jpgname))
                shutil.copyfile(filePath, newfile)
            else:
                directory = "validation"
                xml_path = os.path.join(os.getcwd(), '/home/ubuntu/{}'.format(directory))
                if not os.path.exists(xml_path):
                    os.mkdir(xml_path)
                val_num += 1
                filePath = os.path.join(xmlfilepath, xmlname)
                newfile = os.path.join(saveBasePath, os.path.join(directory, xmlname))
                shutil.copyfile(filePath, newfile)
                filePath = os.path.join(xmlfilepath, jpgname)
                newfile = os.path.join(saveBasePath, os.path.join(directory, jpgname))
                shutil.copyfile(filePath, newfile)

end = time.time()
seconds = end - start
print("train total : " + str(train_num))
print("test total : " + str(test_num))
total_num = train_num + val_num + test_num
print("total number : " + str(total_num))
print("Time taken : {0} seconds".format(seconds))
The dataset is now split into train and validation folders; next, set up the training environment. This assumes CUDA, cuDNN, and TensorFlow are already installed:
ubuntu@ubuntu:~/models$ python3
Python 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
2021-02-20 20:41:33.026242: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/xiakejiang/Qt5.10.0/5.10.0/gcc_64/lib:/usr/local/cuda/lib64::/home/ps/anaconda3/envs/biyanhua_py3/mpi/:/usr/local/lib:/usr/local/cuda/lib64:/usr/local/cuda-10.0/lib64:/usr/local/lib:/usr/local/cuda/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64
2021-02-20 20:41:33.026279: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
>>> tensorflow.__version__
'2.4.1'
>>>
Install a few pip packages first:
ubuntu@ubuntu:~$ pip install tf-models-official
ubuntu@ubuntu:~$ pip install cython
ubuntu@ubuntu:~$ pip install git+https://github.com/cocodataset/cocoapi.git
ubuntu@ubuntu:~$ wget https://github.com/google/protobuf/releases/download/v3.3.0/protoc-3.3.0-linux-x86_64.zip
ubuntu@ubuntu:~$ sudo apt-get install unzip
ubuntu@ubuntu:~$ unzip protoc-3.3.0-linux-x86_64.zip -d protoc-3.3.0-linux-x86_64
ubuntu@ubuntu:~$ sudo mv protoc-3.3.0-linux-x86_64/ /opt
ubuntu@ubuntu:~$ cd /opt/protoc-3.3.0-linux-x86_64/bin
ubuntu@ubuntu:/opt/protoc-3.3.0-linux-x86_64/bin$ chmod +x protoc # this step is necessary, otherwise the old protoc 2.6.1 keeps being executed
Append the following line to ~/.bashrc:
export PATH=/opt/protoc-3.3.0-linux-x86_64/bin:$PATH
ubuntu@ubuntu:~$ source ~/.bashrc
Next, download the TensorFlow models source:
ubuntu@ubuntu:~$ git clone https://github.com/tensorflow/models.git
ubuntu@ubuntu:~$ sudo vim ~/.bashrc
export PYTHONPATH=$PYTHONPATH:/home/ubuntu/models/research:/home/ubuntu/models/research/slim
ubuntu@ubuntu:~$ source ~/.bashrc
ubuntu@ubuntu:~$ cd models/research/
ubuntu@ubuntu:~/models/research/$ protoc object_detection/protos/*.proto --python_out=.
ubuntu@ubuntu:~/models/research/$ cp object_detection/packages/tf2/setup.py .
ubuntu@ubuntu:~/models/research/$ python -m pip install --use-feature=2020-resolver . -i https://pypi.tuna.tsinghua.edu.cn/simple
ubuntu@ubuntu:~/models/research/$ python object_detection/builders/model_builder_tf1_test.py
ubuntu@ubuntu:~/models$ python3
Python 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> tensorflow.__version__
'1.15.0'
>>>
Test whether the installation succeeded:
ubuntu@ubuntu:~/models/research$ python object_detection/builders/model_builder_tf2_test.py
On success the output looks like this:
.......
[ OK ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
----------------------------------------------------------------------
Ran 20 tests in 68.510s
OK (skipped=1)
Continuing with the jpg & xml dataset, convert the xml annotations to csv, following raccoon_dataset/xml_to_csv.py (datitran/raccoon_dataset on GitHub), with modifications since the original code has problems:
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET

def xml_to_csv(path):
    xml_list = []
    for xml_file in glob.glob(path + '*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     # int(float(...)) tolerates coordinates written as floats
                     int(float(member[4][0].text)),
                     int(float(member[4][1].text)),
                     int(float(member[4][2].text)),
                     int(float(member[4][3].text))
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df

def main():
    xml_path = '/home/ubuntu/train/'  # folder containing the xml files
    xml_df = xml_to_csv(xml_path)
    xml_df.to_csv('/home/ubuntu/train/class.csv', index=None)  # write the csv to this path
    print('Successfully converted xml to csv.')
    xml_path = '/home/ubuntu/validation/'  # folder containing the xml files
    xml_df = xml_to_csv(xml_path)
    xml_df.to_csv('/home/ubuntu/validation/class.csv', index=None)  # write the csv to this path
    print('Successfully converted xml to csv.')

main()
This generates the train and validation csv files. Next comes the TFRecord conversion code (copied and then modified):
"""
# From tensorflow/models/
# Create train data:
python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=train.record
# Create test data:
python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import os
import io
import pandas as pd
import tensorflow.compat.v1 as tf
from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict
flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('image_dir', '', 'Path to images')
FLAGS = flags.FLAGS
# TO-DO replace this with label map
def class_text_to_int(row_label):
print(row_label)
if row_label == 'bike':
return 1
elif row_label=='motorcycle':
return 2
elif row_label=='mini bus':
return 3
elif row_label=='jeep':
return 4
elif row_label=='taxi':
return 5
elif row_label=='sedan':
return 6
elif row_label=='racing':
return 7
elif row_label=='fire engine':
return 8
elif row_label=='bus':
return 9
elif row_label=='heavy truck':
return 10
elif row_label=='suv':
return 11
elif row_label=='truck':
return 12
else:
None
def split(df, group):
data = namedtuple('data', ['filename', 'object'])
gb = df.groupby(group)
return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]
def create_tf_example(group, path):
with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = Image.open(encoded_jpg_io)
width, height = image.size
filename = group.filename.encode('utf8')
image_format = b'jpg'
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
for index, row in group.object.iterrows():
xmins.append(row['xmin'] / width)
xmaxs.append(row['xmax'] / width)
ymins.append(row['ymin'] / height)
ymaxs.append(row['ymax'] / height)
classes_text.append(row['class'].encode('utf8'))
classes.append(class_text_to_int(row['class']))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(filename),
'image/source_id': dataset_util.bytes_feature(filename),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature(image_format),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
return tf_example
def main(_):
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
path = os.path.join(FLAGS.image_dir)
examples = pd.read_csv(FLAGS.csv_input)
grouped = split(examples, 'filename')
for group in grouped:
tf_example = create_tf_example(group, path)
writer.write(tf_example.SerializeToString())
writer.close()
output_path = os.path.join(os.getcwd(), FLAGS.output_path)
print('Successfully created the TFRecords: {}'.format(output_path))
if __name__ == '__main__':
tf.app.run()
Run the conversion:
ubuntu@ubuntu:~$ python3 generate_tfrecord.py --csv_input=./train/class.csv --output_path=./train.record --image_dir=./train/
ubuntu@ubuntu:~$ python3 generate_tfrecord.py --csv_input=./validation/class.csv --output_path=./validation.record --image_dir=./validation/
This produces the record files:
-rw-r--r-- 1 root root 3928 2月 23 15:35 generate_tfrecord.py
drwx------ 2 ps ps 4096 2月 22 22:32 mobilenet
-rw-rw-r-- 1 root ps 78306834 2月 22 22:01 mobilenet_v2_1.0_224.tgz
drwxrwxr-x 9 root ps 4096 2月 23 12:52 models
-rw-rw-r-- 1 ps ps 2320 2月 23 15:33 split.py
drwxrwxr-x 2 ps ps 200704 2月 23 15:24 train
-rw-rw-r-- 1 ps ps 568895266 2月 23 15:36 train.record
drwxrwxr-x 2 ps ps 40960 2月 23 15:24 validation
-rw-rw-r-- 1 ps ps 139858764 2月 23 15:36 validation.record
drwxrwxr-x 4 root ps 241664 2月 23 13:43 xml
-rw-rw-r-- 1 ps ps 1464 2月 23 15:34 xml_to_csv.py
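Before training, it is worth sanity-checking the generated records; a minimal sketch that counts the serialized examples (file names taken from the commands above):
import tensorflow.compat.v1 as tf

# Count the examples in each TFRecord to verify the conversion
for record in ['./train.record', './validation.record']:
    count = sum(1 for _ in tf.python_io.tf_record_iterator(record))
    print(record, 'contains', count, 'examples')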
Next, prepare the training config:
ubuntu@ubuntu:~/models/research/object_detection/samples/configs$ pwd
/home/ubuntu/models/research/object_detection/samples/configs
Then create a working folder, move the training tfrecord and a config file (see https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) into it, and then:
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223$ cp ../ssdlite_mobilenet_v3_small_320x320_coco.config .
The header of ssdlite_mobilenet_v3_small_320x320_coco.config:
# SSDLite with Mobilenet v3 small feature extractor.
# Trained on COCO14, initialized from scratch.
# TPU-compatible.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader.
The label map file mscoco_label_map.pbtxt:
item {
id: 1 # ids are numbered starting from 1
name: 'bike'
}
item {
id: 2
name: 'motorcycle'
}
item {
id: 3
name: 'mini bus'
}
item {
id: 4
name: 'jeep'
}
item {
id: 5
name: 'taxi'
}
item {
id: 6
name: 'sedan'
}
item {
id: 7
name: 'racing'
}
item {
id: 8
name: 'fire engine'
}
item {
id: 9
name: 'bus'
}
item {
id: 10
name: 'heavy truck'
}
item {
id: 11
name: 'suv'
}
item {
id: 12
name: 'truck'
}
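For reference, the fields called out in the config header are the ones to edit; a hedged excerpt of the relevant parts (the field names follow the TF Object Detection API config format, but the values here are assumptions for this 12-class dataset, not the author's exact settings):
model {
  ssd {
    num_classes: 12  # matches the 12 items in mscoco_label_map.pbtxt
    ...
  }
}
train_config {
  batch_size: 32  # assumed value; adjust to GPU memory
  # fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"  # optional when training from scratch
}
train_input_reader {
  label_map_path: "/home/ubuntu/models/research/object_detection/samples/configs/20210223/mscoco_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/home/ubuntu/models/research/object_detection/samples/configs/20210223/train.record"
  }
}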
Then create a folder to store the training checkpoints, and start training:
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223$ tree
.
├── mscoco_label_map.pbtxt
├── saved
│   └── log
├── ssdlite_mobilenet_v3_small_320x320_coco.config
└── train.record
2 directories, 3 files
On to the training code. Note: there is a known problem, ModuleNotFoundError: No module named 'pycocotools'; the fix below follows the blog 【Tensorflow】SSD_Mobilenet_v2实现目标检测(一):环境配置+训练_摇曳的树的博客:
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
make install
python setup.py install
cp -r ./cocoapi/PythonAPI/pycocotools ./models-master/research/ # copy pycocotools into models-master/research/
Before training, modify the main program; the point is to control how GPU memory is allocated:
ubuntu@ubuntu:~/models$ sudo vim research/object_detection/model_main.py
import os
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ['CUDA_VISIBLE_DEVICES'] = "2,3"  # select which GPUs to use
config = ConfigProto()
config.allow_soft_placement = True  # if a specified device does not exist, let TF pick one automatically
config.gpu_options.per_process_gpu_memory_fraction = 0.7  # cap the process at 70% of GPU memory to avoid OOM; tune as needed
config.gpu_options.allow_growth = True  # allocate GPU memory on demand; this one is important
session = InteractiveSession(config=config)
Then start training:
ubuntu@ubuntu:~/models$ CUDA_VISIBLE_DEVICES=2,3 nohup python3 research/object_detection/model_main.py --logtostderr --model_dir=/home/ubuntu/models/research/object_detection/samples/configs/20210223/saved --pipeline_config_path=/home/ubuntu/models/research/object_detection/samples/configs/20210223/ssdlite_mobilenet_v3_small_320x320_coco.config --num_train_steps=1000000 --keep_checkpoint_max=50 --save_checkpoints_steps=1000 --sample_1_of_n_eval_examples=1
Monitor GPU usage in another terminal:
watch -n 1 nvidia-smi
View the training progress:
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223$ tensorboard --logdir=saved --port 8900
Then open the page in a browser:
http://192.168.*.**:8900
Then export the inference pb model:
ubuntu@ubuntu:~/models$ CUDA_VISIBLE_DEVICES=3 python research/object_detection/export_inference_graph.py --pipeline_config_path=research/object_detection/samples/configs/20210223/ssdlite_mobilenet_v3_small_320x320_coco.config --trained_checkpoint_prefix=research/object_detection/samples/configs/20210223/saved/model.ckpt-1000000 --output_directory=research/object_detection/samples/configs/20210223/saved/
Folder contents:
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223/saved$ ls
checkpoint model.ckpt-972810.data-00000-of-00001
Test the exported frozen graph with a quick Python script:
import tensorflow as tf
import cv2
import numpy as np

def graph_create(graphpath):
    with tf.gfile.FastGFile(graphpath, 'rb') as graphfile:
        graphdef = tf.GraphDef()
        graphdef.ParseFromString(graphfile.read())
    return tf.import_graph_def(graphdef, name='', return_elements=[
        'image_tensor:0', 'detection_boxes:0', 'detection_scores:0', 'detection_classes:0'])

nameList = ["bike", "bus", "sedan", "truck", "fire engine", "jeep", "mini bus", "motorcycle", "racing", "suv", "taxi", "heavy truck"]
image_tensor, box, score, cls = graph_create("G:\\saved\\frozen_inference_graph.pb")
image_file = "F:\\temp\\1.jpg"

with tf.Session() as sess:
    image = cv2.imread(image_file)
    image_data = np.expand_dims(image, axis=0).astype(np.uint8)
    b, s, c = sess.run([box, score, cls], {image_tensor: image_data})
    boxes = b[0]
    conf = s[0]
    clses = c[0]
    # writer = tf.summary.FileWriter('debug', sess.graph)
    for i in range(8):
        bx = boxes[i]
        print(boxes[i])
        if conf[i] < 0.55:
            continue
        h = image.shape[0]
        w = image.shape[1]
        p1 = (int(w * bx[1]), int(h * bx[0]))
        p2 = (int(w * bx[3]), int(h * bx[2]))
        # detection_classes ids start at 1, so subtract 1 when indexing nameList
        # (nameList must follow the id order defined in mscoco_label_map.pbtxt)
        cv2.putText(image, nameList[int(clses[i]) - 1], p2, cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 1, cv2.LINE_AA)
        cv2.rectangle(image, p1, p2, (0, 255, 0))
        print(clses[i])
    cv2.imshow("mobilenet-ssd", image)
    cv2.waitKey(0)
Detection results:
Convert the model to tflite:
ubuntu@ubuntu:~/models$ python research/object_detection/export_tflite_ssd_graph.py --pipeline_config_path=research/object_detection/samples/configs/20210223/ssdlite_mobilenet_v3_small_320x320_coco.config --trained_checkpoint_prefix=research/object_detection/samples/configs/20210223/saved/model.ckpt-1000000 --output_directory=research/object_detection/samples/configs/20210223/saved/tflite --max_detections=100 --add_postprocessing_op=true
ubuntu@ubuntu:~$ pip install tf-nightly
ubuntu@ubuntu:~$ tflite_convert --output_file=research/object_detection/samples/configs/20210223/saved/tflite/tflite_graph.tflite --graph_def_file=research/object_detection/samples/configs/20210223/saved/tflite/tflite_graph.pb --output_format=TFLITE --input_shape=1,320,320,3 --input_arrays="normalized_input_image_tensor" --output_arrays="TFLite_Detection_PostProcess","TFLite_Detection_PostProcess:1","TFLite_Detection_PostProcess:2","TFLite_Detection_PostProcess:3" --inference_input_type=FLOAT --inference_type=FLOAT --std_dev_values=128 --mean_values=128 --change_concat_input_ranges=false --default_ranges_min=0 --max_detections=100 --default_ranges_max=6 --allow_custom_ops
Note the input shape here is 1,320,320,3.
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223/saved/tflite$ ls
tflite_graph.pb tflite_graph.pbtxt tflite_graph.tflite
At this point we have the pb model for Python and the tflite model that the Android phone will call.
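Before moving on, the converted model's input and output signatures can be double-checked from Python; a minimal sketch (model path assumed from the folder above):
import tensorflow as tf

# Inspect the converted tflite model's tensors (path assumed)
interpreter = tf.lite.Interpreter(model_path="research/object_detection/samples/configs/20210223/saved/tflite/tflite_graph.tflite")
interpreter.allocate_tensors()
for d in interpreter.get_input_details():
    print('input :', d['name'], d['shape'], d['dtype'])   # expect a [1 320 320 3] float32 input
for d in interpreter.get_output_details():
    print('output:', d['name'], d['shape'], d['dtype'])   # expect the four TFLite_Detection_PostProcess outputs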
Converting with a Python script also works:
import tensorflow as tf

# paths to configure
in_path = r"ubuntu\home\saved\tflite\tflite_graph.pb"
# model input node
input_tensor_name = ["normalized_input_image_tensor"]
input_tensor_shape = {"normalized_input_image_tensor": [1, 320, 320, 3]}
# model output nodes
classes_tensor_name = ["TFLite_Detection_PostProcess", "TFLite_Detection_PostProcess:1", "TFLite_Detection_PostProcess:2", "TFLite_Detection_PostProcess:3"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(in_path,
                                                      input_tensor_name, classes_tensor_name,
                                                      input_tensor_shape)
converter.allow_custom_ops = True
converter.post_training_quantize = True
tflite_model = converter.convert()
open("output_detect.tflite", "wb").write(tflite_model)
print("done")
For model compression (post-training quantization), add --post_training_quantize to the tflite_convert command:
ubuntu@ubuntu:~/models$ tflite_convert --output_file=research/object_detection/samples/configs/20210223/saved/tflite/tflite_graph.tflite --graph_def_file=research/object_detection/samples/configs/20210223/saved/tflite/tflite_graph.pb --output_format=TFLITE --input_shape=1,320,320,3 --input_arrays="normalized_input_image_tensor" --output_arrays="TFLite_Detection_PostProcess","TFLite_Detection_PostProcess:1","TFLite_Detection_PostProcess:2","TFLite_Detection_PostProcess:3" --inference_input_type=FLOAT --inference_type=FLOAT --std_dev_values=128 --mean_values=128 --change_concat_input_ranges=false --default_ranges_min=0 --max_detections=100 --default_ranges_max=6 --allow_custom_ops --post_training_quantize
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223/saved/tflite$ ls -l
总用量 21256
-rw-rw-r-- 1 ps ps 4535842 2月 26 23:24 tflite_graph.pb
-rw-rw-r-- 1 ps ps 13001435 2月 26 23:24 tflite_graph.pbtxt
-rw-rw-r-- 1 ps ps 4220372 3月 1 18:45 tflite_graph.tflite
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223/saved/tflite$ ls
tflite_graph.pb tflite_graph.pbtxt tflite_graph.tflite # listing above: before quantization; listing below: after quantization
ubuntu@ubuntu:~/models/research/object_detection/samples/configs/20210223/saved/tflite$ ls -l
总用量 18284
-rw-rw-r-- 1 ps ps 4535842 2月 26 23:24 tflite_graph.pb
-rw-rw-r-- 1 ps ps 13001435 2月 26 23:24 tflite_graph.pbtxt
-rw-rw-r-- 1 ps ps 1177376 3月 1 18:45 tflite_graph.tflite
Let's first test with Python; the tflite test code comes from this repo:
git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import glob
import importlib.util

# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='tflite_graph.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--image', help='Name of the single image to perform detection on. To run detection on multiple images, use --imagedir',
                    default=None)
parser.add_argument('--imagedir', help='Name of the folder containing images to perform detection on. Folder must contain only images.',
                    default=None)
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')

args = parser.parse_args()

MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
use_TPU = args.edgetpu

# Parse input image name and directory.
IM_NAME = args.image
IM_DIR = args.imagedir

# If both an image AND a folder are specified, throw an error
if (IM_NAME and IM_DIR):
    print('Error! Please only use the --image argument or the --imagedir argument, not both. Issue "python TFLite_detection_image.py -h" for help.')
    sys.exit()

# If neither an image or a folder are specified, default to using 'test1.jpg' for image name
if (not IM_NAME and not IM_DIR):
    IM_NAME = '1.jpg'

# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'

# Get path to current working directory
CWD_PATH = os.getcwd()

# Define path to images and grab all image filenames
if IM_DIR:
    PATH_TO_IMAGES = os.path.join(CWD_PATH, IM_DIR)
    images = glob.glob(PATH_TO_IMAGES + '/*')
elif IM_NAME:
    PATH_TO_IMAGES = os.path.join(CWD_PATH, IM_NAME)
    images = glob.glob(PATH_TO_IMAGES)

# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, GRAPH_NAME)

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH, MODEL_NAME, LABELMAP_NAME)

# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])

# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

floating_model = (input_details[0]['dtype'] == np.float32)

input_mean = 127.5
input_std = 127.5

# Loop over every image and perform detection
for image_path in images:
    # Load image and resize to expected shape [1xHxWx3]
    image = cv2.imread(image_path)
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    imH, imW, _ = image.shape
    image_resized = cv2.resize(image_rgb, (width, height))
    input_data = np.expand_dims(image_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]  # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0]  # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0]  # Confidence of detected objects
    # num = interpreter.get_tensor(output_details[3]['index'])[0]  # Total number of detected objects (inaccurate and not needed)

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))

            cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])]  # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i] * 100))  # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)  # Get font size
            label_ymin = max(ymin, labelSize[1] + 10)  # Make sure not to draw label too close to top of window
            cv2.rectangle(image, (xmin, label_ymin - labelSize[1] - 10), (xmin + labelSize[0], label_ymin + baseLine - 10), (255, 255, 255), cv2.FILLED)  # Draw white box to put label text in
            cv2.putText(image, label, (xmin, label_ymin - 7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)  # Draw label text

    # All the results have been drawn on the image, now display the image
    cv2.imshow('Object detector', image)

    # Press any key to continue to next image, or press 'q' to quit
    if cv2.waitKey(0) == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
Test command:
python F:\TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\TFLite_detection_image.py --modeldir G:\saved\tflite --labels G:\saved\tflite\labelmap.txt
Test results:
labelmap.txt contents:
bike
bus
sedan
truck
fire engine
jeep
mini bus
motorcycle
racing
suv
taxi
heavy truck
Link: https://pan.baidu.com/s/1sFCLGN9uV3PgGqRbVqrGdQ
Extraction code: 9wck
The result on an Android phone looks like this:
Project address: https://github.com/sxj731533730/TfLiteDetection