Idea generation from ChatGPT

To detect independent movement of a transparent layer and a solid layer beneath it, along with dots on top, you can employ several techniques depending on the specifics of your setup and requirements. Here's a general approach:

1. Optical Flow Sensor Selection: Ensure that the optical flow sensor you're using is capable of detecting motion through transparent materials. Some sensors might struggle with this due to reflections or refractions caused by the transparency.
2. Image Processing: Use image processing techniques to analyze the output from the optical flow sensor. This involves extracting features, such as the dots on the layers, and tracking their movement over time.
3. Background Subtraction: Implement background subtraction techniques to separate the movement of the transparent layer from the solid layer beneath it. This helps in isolating the motion of each layer independently.
4. Feature Tracking: Track the movement of specific features (like dots) on both layers. This can be done using methods such as Lucas-Kanade optical flow or feature-tracking algorithms like SIFT or SURF (see the sketch after this list).
5. Coordinate Transformation: Since the layers are stacked on top of each other, you might need to transform the coordinates of the detected motion on the transparent layer to match the coordinates on the solid layer.
6. Motion Analysis: Analyze the tracked motion to identify independent movements of each layer. This could involve comparing velocities, directions, or other characteristics of the detected motion.
7. Thresholding and Filtering: Apply appropriate thresholds and filters to eliminate noise and false detections in the motion data.
8. Integration and Validation: Integrate the above steps into a coherent system and validate the results through experimentation and testing with different scenarios.

Remember to fine-tune parameters and algorithms based on the specific characteristics of your setup and the behavior you're trying to detect. Additionally, consider the computational resources available and the real-time requirements of your application.
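
As a starting point for step 4 (and the velocity comparison in step 6), here is a minimal sketch of Lucas-Kanade point tracking with OpenCV. The input file name "layers.mp4" and the simple mean-velocity printout are placeholders for illustration, not a definitive implementation.

import cv2 as cv
import numpy as np

#Hypothetical input video showing both layers; replace with a real capture
cap = cv.VideoCapture("layers.mp4")
ok, prev = cap.read()
prev_gray = cv.cvtColor(prev, cv.COLOR_BGR2GRAY)

#Seed the tracker with up to 50 strong corners (the dots)
p0 = cv.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or p0 is None or len(p0) == 0:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

    #Track the seeded points from the previous frame into this one
    p1, status, err = cv.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good_new = p1[status.flatten() == 1]
    good_old = p0[status.flatten() == 1]

    #Per-point velocities; points on independently moving layers
    #should fall into distinct velocity clusters
    velocities = (good_new - good_old).reshape(-1, 2)
    if len(velocities) > 0:
        mean_v = velocities.mean(axis=0)
        print("mean flow dx={:.2f} dy={:.2f}".format(mean_v[0], mean_v[1]))

    prev_gray = gray
    p0 = good_new.reshape(-1, 1, 2)

cap.release()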


Testing transparency of plexi layer with the PAA5100JE

Tested the flow sensor to see if it could detect movement when placed more than 35mm from the skin layer → detecting movement there would mean that it is not able to see through the transparent layer.

This test was at first not successful because the sensor picked up many shadows when placed in the setup, causing it to detect movement up to 7cm from the transparent layer. The sensor is only supposed to capture movement 15-35mm from the sensor.

The second test was done with the setup below, which decreased the chance of shadows interfering. The sensor detected no movement, so the test was successful: no 2D movement was reported by the code given in Optical Flow Sensor - PAA5100JE for reading deltaX and deltaY.
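
For reference, a minimal sketch of the deltaX/deltaY readout used in this kind of test, assuming Pimoroni's pmw3901 Python library and its PAA5100 class; the chip-select constant depends on how the breakout is wired.

import time
from pmw3901 import PAA5100, BG_CS_FRONT_BCM

#BG_CS_FRONT_BCM assumes a Breakout Garden front slot; adjust for other wiring
sensor = PAA5100(spi_port=0, spi_cs_gpio=BG_CS_FRONT_BCM)

tx = ty = 0  #Accumulated 2D movement
while True:
    try:
        dx, dy = sensor.get_motion()  #Raises RuntimeError if no motion data arrives in time
    except RuntimeError:
        continue
    tx += dx
    ty += dy
    print("dx: {:03d} dy: {:03d} | total x: {:03d} y: {:03d}".format(dx, dy, tx, ty))
    time.sleep(0.01)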

The transparency was more accurately measured in the second try, because optical flow is sensitive to shadows, which violate the Lambertian surface reflectance constraint (Barron 1995).
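
For context, the brightness-constancy assumption that underlies optical flow (satisfied by Lambertian surfaces, but broken by a moving shadow, which changes pixel intensity without any surface motion) is commonly written as

$I_x u + I_y v + I_t = 0$

where $I_x$, $I_y$ and $I_t$ are the spatial and temporal image derivatives and $(u, v)$ is the flow vector.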

Grids on plexi

The 1 mm² grid posed challenges due to heating during cutting, which resulted in defects as shown below, such as curving of the grid area, melting of grids, and splitting.

Using chessboard functions and code from the OpenCV page

I coloured the tiles in the 3mm laser-cut plexi to mimic a chessboard, and changed the code to detect corners (intersections) on the board. The mask shows promise, but I suspect that the cut lines interfere, so there is no connection between the different tiles and the code does not see the intersections as corners (as seen below). I will therefore try to change how I draw the grid onto the plexi.

IMG-5390.mp4


TrackingGridVideo
import argparse
import imutils
import datetime
import time
import cv2 as cv
import numpy as np

from imutils.video import VideoStream
from collections import deque

#Defining arguments to locate and play files
ap = argparse.ArgumentParser()
ap.add_argument("-v","--video",help="Path to the video file")
ap.add_argument("-b","--buffer",type=int,default=32,help="Max buffer size")
args = vars(ap.parse_args())

#HSV range used to isolate the dark grid tiles (despite the "green" names)
green_lower = np.array([0,0,0], np.uint8)
green_upper = np.array([160, 255, 90], np.uint8)

#Defining variables for the locations shown on frame
counter = 0
pts = deque(maxlen=args["buffer"])
(dx,dy) = (0,0)

#Reading files or use webcam to capture
if not args.get("video",False):
    v = VideoStream(src=0).start()
    frame_width = int(v.stream.get(cv.CAP_PROP_FRAME_WIDTH))
    frame_height = int(v.stream.get(cv.CAP_PROP_FRAME_HEIGHT))
else:
    v = cv.VideoCapture(args["video"])
    frame_width = int(v.get(cv.CAP_PROP_FRAME_WIDTH))
    frame_height = int(v.get(cv.CAP_PROP_FRAME_HEIGHT))

time.sleep(2.0)
index=0

#While loop to go through frames and track
while True:
    frame = v.read()
    #VideoCapture.read() returns a (grabbed, frame) tuple; VideoStream returns the frame directly
    if args.get("video", False):
        frame = frame[1]

    if frame is None:
        break

    #Working the frames to be able to locate and track dot
    frame = imutils.resize(frame,width=600)
    hsv = cv.cvtColor(frame,cv.COLOR_BGR2HSV)
    gray = cv.cvtColor(frame,cv.COLOR_BGR2GRAY)

    mask = cv.inRange(hsv,green_lower,green_upper)

    found,corners = cv.findChessboardCorners(gray,(9,9))

    if found:
        #Reshape corners to have two elements per corner
        corners = corners.reshape(-1, 2)

        #Draw the detected corners onto the mask for visual feedback
        cv.drawChessboardCorners(mask,(9,9),corners,found)

        #Store every detected corner in the tracking buffer
        for corner in corners:
            pts.append(corner)
    
        #Setting counter and difference in locations between frames
        for j in np.arange(1,len(pts)):
            if pts[j-1] is None or pts[j] is None:
                continue
            if counter >= 10 and j==1 and pts[-10] is not None:
                dx = pts[-10][0] - pts[j][0]
                dy = pts[-10][1] - pts[j][1]

            #cv.line needs integer pixel coordinates
            thickness = int(np.sqrt(args["buffer"]/float(j+1))*2.5)
            cv.line(frame,tuple(pts[j-1].astype(int)),tuple(pts[j].astype(int)),(0,255,0),thickness)

        #Putting text on frame to show time, and movement from last frame
        cv.putText(frame,"dx:{},dy:{}".format(dx,dy),(10,frame.shape[0]-10),cv.FONT_HERSHEY_TRIPLEX,0.35,(0,255,0),1)
        cv.putText(frame,datetime.datetime.now().strftime("%A %d %B %Y %H:%M:%S%p"),(10,30),cv.FONT_HERSHEY_TRIPLEX,0.35,(0,255,0),1)

        #Saving frames to folder
        name = './VideoSave2/frame' + str(index) + '.jpg'
        cv.imwrite(name, frame)
        index+=1

        #Show frames and give option to exit
        cv.imshow("Frame",frame)

        key = cv.waitKey(1) & 0xFF 
        counter += 1

        if key == ord("d"):
            break
    else:
        cv.imshow("Frame",mask)

        key = cv.waitKey(1) & 0xFF 
        counter += 1

        if key == ord("d"):
            break

        print("Chessboard corners not found.")
    
    
if not args.get("video",False):
    v.stop()

else:
    v.release()

cv.destroyAllWindows()

This code currently gives the output "Chessboard corners not found." and shows me the frame with the mask.

Using Harris instead of chessboard

More on that here.
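
A minimal sketch of what the Harris-based version could look like, using OpenCV's cv.cornerHarris; the blockSize/ksize/k values and the 1% response threshold are untuned starting guesses, and "grid_frame.jpg" is a hypothetical frame grabbed from the video.

import cv2 as cv
import numpy as np

frame = cv.imread("grid_frame.jpg")  #Hypothetical frame grabbed from the video
gray = np.float32(cv.cvtColor(frame, cv.COLOR_BGR2GRAY))

#Harris response: high values at corner-like grid intersections
response = cv.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
response = cv.dilate(response, None)  #Dilate to make the maxima easier to see

#Mark every pixel whose response exceeds 1% of the strongest corner
frame[response > 0.01 * response.max()] = (0, 0, 255)

cv.imshow("Harris corners", frame)
cv.waitKey(0)
cv.destroyAllWindows()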



