An ArUco marker is a type of fiducial marker, similar to a QR code. It consists of a wide black border and an inner binary matrix that encodes its identifier (id). With OpenCV's ArUco module, the camera can detect these markers and give you information about them.
The first thing you need to do is generate an ArUco marker. You can do this by running the following code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
marker_id = 1 # The identifier of the marker; the dictionary we're using contains 250 ids (0-249)
img_size = 250 # Size of the output image in pixels
marker_img = cv2.aruco.generateImageMarker(aruco_dict, marker_id, img_size)
plt.imshow(marker_img, cmap='gray', interpolation='nearest') # Plot the marker
plt.show()
cv2.imwrite('aruco{}.png'.format(marker_id), marker_img) # Save the marker
After you have saved the image, print the marker on a piece of paper. Now we can start calibrating.
Place the marker in the middle of the camera's field of view and position your robot arm directly on the center of the marker. Write down the x and y values of the robot's coordinates. Then move the robot arm out of the camera view and take a picture of the marker (I named the image "img"; if you used a different name, remember to also change it in the code). It is important that the marker is in exactly the same position as when you wrote down the robot coordinates.
Run the code below, where marker_actual_size_mm is the size of your printed marker, so measure it first.
img = cv2.imread('img.png') # Load the photo of the marker ('img.png' is an example filename; use your own)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Convert the image to grayscale (cv2.imread loads images as BGR)
marker_actual_size_mm = 66 # The measured size of your printed marker in mm
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
parameters = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(aruco_dict, parameters)
corners, ids, rejected = detector.detectMarkers(gray) # Corner coordinates and id of the marker
if ids is not None:
    cv2.aruco.drawDetectedMarkers(img, corners, ids)
    marker_corners = corners[0][0]
    top_left, top_right, bottom_right, bottom_left = marker_corners
    pixel_width = np.linalg.norm(top_right - top_left)
    pixel_to_mm_ratio = marker_actual_size_mm / pixel_width
    print(f"Pixel-to-mm conversion factor: {pixel_to_mm_ratio} mm per pixel")
    center_x = int(np.mean(marker_corners[:, 0]))
    center_y = int(np.mean(marker_corners[:, 1]))
    center = (center_x, center_y)
    cv2.circle(img, center, 10, (0, 255, 0), -1)
    print(f"Center of the marker: X={center_x}, Y={center_y}")
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # Convert to RGB for matplotlib
    plt.title("Image with detected ArUco marker")
    plt.show()
else:
    print("No ArUco marker detected.")
If done correctly, the plotted image should show a green outline around the marker and a green dot at its center, and the script should have printed the center coordinates of the marker in pixels along with the pixel-to-mm conversion factor.
Now you need to check how your robot arm and camera are oriented relative to each other. In my setup, the positive x-axis of the camera points in the same direction as the positive x-axis of the robot, and the positive y-axis of the camera points in the same direction as the negative y-axis of the robot. So:
x_cam = x_robot + offsetx
y_cam = - y_robot + offsety
If your axes are aligned differently, you just rewrite the equations. For example, if your x- and y-axes are swapped, you would write:
x_cam = y_robot + offsetx
y_cam = x_robot + offsety
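The axis mapping above, and its inverse (which is what you ultimately need to send camera detections to the robot), can be sketched as two small helpers. This assumes the orientation from my setup; adapt the signs to match your own axes. The numbers in the round-trip check are arbitrary examples.

```python
def robot_to_camera(x_robot, y_robot, offsetx, offsety):
    # x_cam = x_robot + offsetx, y_cam = -y_robot + offsety
    return x_robot + offsetx, -y_robot + offsety

def camera_to_robot(x_cam, y_cam, offsetx, offsety):
    # Invert the equations above to go from camera to robot coordinates
    return x_cam - offsetx, offsety - y_cam

# Round-trip check with arbitrary example numbers
x_cam, y_cam = robot_to_camera(100.0, 50.0, 10.0, 200.0)
print(camera_to_robot(x_cam, y_cam, 10.0, 200.0))  # (100.0, 50.0)
```

A round trip like this is a quick sanity check that the signs in your rewritten equations are consistent.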
The offsets are easily determined: we have the coordinates of one point from both the camera and the robot. Since the robot coordinates are in mm, first use the pixel-to-mm conversion factor you just calculated to convert the camera coordinates from pixels to mm. Then rewrite the formulas to get the offsets:
offsetx = x_cam - x_robot
offsety = y_cam + y_robot
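As a worked sketch of this step, here is the calculation with hypothetical example numbers (replace them with your own measured values):

```python
# Hypothetical example values; substitute your own measurements
pixel_to_mm_ratio = 0.5          # mm per pixel, from the detection step
center_x, center_y = 320, 240    # marker center in pixels, from the detection step
x_robot, y_robot = 150.0, 80.0   # robot coordinates you wrote down (mm)

# Convert the camera coordinates from pixels to mm
x_cam = center_x * pixel_to_mm_ratio   # 160.0 mm
y_cam = center_y * pixel_to_mm_ratio   # 120.0 mm

# Rewrite the axis equations to get the offsets
offsetx = x_cam - x_robot   # 160.0 - 150.0 = 10.0
offsety = y_cam + y_robot   # 120.0 + 80.0 = 200.0
print(offsetx, offsety)
```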
Now your camera and robot are calibrated! When you pass coordinates from the camera to the robot, you only need to apply the offsets and the robot will know where to go.