Convert Yolov3 Output To Coordinates Of Bounding Box, Label And Confidence
I run the YoloV3 model and get detections - a dictionary of 3 entries: 'detector/yolo-v3/Conv_22/BiasAdd/YoloRegion' : numpy.ndarray with shape (1,255,52,52), 'detector/yolo-v3/Conv_6/B
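For reference, the 255 channels in each YoloRegion output pack 3 anchors per grid cell, each with 4 box offsets + 1 objectness score + 80 COCO class scores (3 × 85 = 255). A minimal sketch of that decomposition, with a zero tensor standing in for the real output:

```python
import numpy as np

# Stand-in for the real (1, 255, 52, 52) YoloRegion tensor.
raw = np.zeros((1, 255, 52, 52), dtype=np.float32)

# Make each anchor's 85-value prediction vector explicit:
# (batch, anchor, 4 box offsets + 1 objectness + 80 classes, grid_y, grid_x)
per_anchor = raw.reshape(1, 3, 85, 52, 52)
print(per_anchor.shape)
```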
Solution 1:
Presuming you use Python and OpenCV, please find the code below, with comments wherever required, to extract the output using the cv2.dnn module.
# assumes net, blob, ln, W, H and threshold are already defined
net.setInput(blob)
layerOutputs = net.forward(ln)

boxes = []
confidences = []
classIDs = []

for output in layerOutputs:
    # loop over each of the detections
    for detection in output:
        # extract the class ID and confidence (i.e., probability) of
        # the current object detection
        scores = detection[5:]
        classID = np.argmax(scores)
        confidence = scores[classID]

        # filter out weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > threshold:
            # scale the bounding box coordinates back relative to the
            # size of the image, keeping in mind that YOLO actually
            # returns the center (x, y)-coordinates of the bounding
            # box followed by the boxes' width and height
            box = detection[0:4] * np.array([W, H, W, H])
            (centerX, centerY, width, height) = box.astype("int")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))

            # update our list of bounding box coordinates, confidences,
            # and class IDs
            boxes.append([x, y, int(width), int(height)])
            confidences.append(float(confidence))
            classIDs.append(classID)

# apply non-maxima suppression; the third argument is the score
# threshold, the fourth the NMS (IoU) overlap threshold, e.g. 0.4
idxs = cv2.dnn.NMSBoxes(boxes, confidences, threshold, nms_threshold)

# results are stored in idxs, boxes, confidences, classIDs