Track Eye Pupil Position with Webcam, OpenCV, and Python


I am trying to build a robot that I can control with basic eye movements. I am pointing a webcam at my face, and depending on the position of my pupil, the robot will move a certain way. If the pupil is in the top, bottom, left corner, or right corner of the eye, the robot will move forwards, backwards, left, or right respectively.

My original plan was to use an eye Haar cascade to find my left eye. I would then use HoughCircles on the eye region to find the center of the pupil. I would determine where the pupil is within the eye by finding the distance from the center of the HoughCircles result to the borders of the general eye region.

So for the first part of my code, I'm hoping to be able to track the center of the eye pupil, as seen in this video: https://youtu.be/agmgyflqafm?t=38

But when I run my code, it cannot consistently find the center of the pupil. The HoughCircles circle is often drawn in the wrong area. How can I make my program consistently find the center of the pupil, even when the eye moves?

Is it possible/better/easier for me to tell my program where the pupil is at the beginning? I've looked at other eye-tracking methods, but I cannot form a general algorithm. If anyone could help form one, it would be much appreciated! https://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf

import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')

# The number selects the camera
cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    #faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = eye_cascade.detectMultiScale(gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(img, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
        roi_gray2 = gray[ey:ey + eh, ex:ex + ew]
        roi_color2 = img[ey:ey + eh, ex:ex + ew]
        circles = cv2.HoughCircles(roi_gray2, cv2.HOUGH_GRADIENT, 1, 20,
                                   param1=50, param2=30, minRadius=0, maxRadius=0)
        try:
            for i in circles[0, :]:
                # Draw the outer circle
                cv2.circle(roi_color2, (i[0], i[1]), i[2], (255, 255, 255), 2)
                print("drawing circle")
                # Draw the center of the circle
                cv2.circle(roi_color2, (i[0], i[1]), 2, (255, 255, 255), 3)
        except Exception as e:
            print(e)
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc
        break

cap.release()
cv2.destroyAllWindows()

I can see two alternatives, based on work I did before:

  1. Train a Haar detector to detect the eyeball, using training images with the center of the pupil at the center and the width of the eyeball as the width. I found this better than using Hough circles or the original eye detector of OpenCV (the one used in your code).

  2. Use dlib's face landmark points to estimate the eye region. Then use the contrast between the white and dark regions of the eyeball, together with contours, to estimate the center of the pupil. This produced much better results.

