The leJOS 0.9.1 release added the OpenCV vision processing library. What better way to test it than to build a face-detecting robot! The first step is to figure out how to use the OpenCV libraries on the EV3 to detect a face in an image.

It turns out this is really hard to do in practice. Lighting is one of the problems; any strong reflections or highlights in the image and OpenCV will fail to detect the face. The other issue is the processing speed of the EV3; face detection is a CPU-heavy task! The EV3 has a fast processor, but it struggles to do reliable face detection in almost real-time to control a robot. 

Let’s start with a simple example, using the webcam capture code as a basis to build on. I’m going to assume that you:

How does face-detection work?

First of all, this code will detect a face-shaped object in an image stream, but it won’t distinguish whose face it is. That is termed face recognition and is a lot harder to do (trust me – I built these systems for a living!). How does OpenCV detect a face? It takes an image and then applies a series of ‘classifiers’ to it to extract face-like features from the image. If the image contains enough evidence of face-like features, the OpenCV code returns the coordinates of the bounding rectangle around what it thinks is the face.

OpenCV in leJOS 0.9.1 has two types of face classifier available: Haar classifiers and LBP feature classifiers. The Haar classifiers are slower but more accurate. In my tests the Haar classifier could reliably detect multiple faces against a bright background, but it was slow! So slow that I was seeing detection times on the order of 1 second per frame (i.e. a frame rate of 1 fps). That is too slow to reliably control a robot’s motion. The LBP feature classifiers are faster but less accurate: with an LBP feature classifier I can get on the order of 8-10 fps on the EV3 with a 160×120 image from a webcam. The problem is that detection is much less reliable at these settings, with faces rarely detected even when perfectly lit.
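To put those frame rates in robot terms, here's a quick back-of-the-envelope sketch. The 20 cm/s robot speed is my assumption for illustration, not a measurement from this setup:

```java
// Back-of-the-envelope: how far does a robot travel "blind" between
// consecutive face detections at a given detection frame rate?
// The robot speed below is an assumed figure, not a measured one.
public class FrameBudget {

    // distance covered between two consecutive detections
    static double cmPerFrame(double speedCmPerSec, double fps) {
        return speedCmPerSec / fps;
    }

    public static void main(String[] args) {
        double speed = 20.0; // assumed robot speed in cm/s
        System.out.println("Haar @ 1 fps: " + cmPerFrame(speed, 1.0) + " cm per frame");
        System.out.println("LBP  @ 8 fps: " + cmPerFrame(speed, 8.0) + " cm per frame");
    }
}
```

At 1 fps the robot covers 20 cm between detections, far too much to track a face; at 8 fps it is down to a more manageable 2.5 cm.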

For a great overview of OpenCV and tutorials (in C++) of many functions take a look here:

Reading from the webcam

I’m using a cheap webcam plugged into a USB hub plugged into the host port on the EV3. leJOS has native support for webcams since the 0.9.0 release, and with OpenCV that support is now present in the OpenCV library too. 

We start by initialising and opening the camera device. You’ll have to set the image width and height here too. I set it to a very small size of 160×120 pixels. The camera can return larger images, but the bigger the image the slower it is to process on the EV3!

VideoCapture vid = new VideoCapture(0);
vid.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, 160);
vid.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, 120);
System.out.println("Camera open");


Once the camera has been opened, reading from it is as simple as:

Mat frame = new Mat();
vid.read(frame);
if (!frame.empty()) {
    // got a frame - process it
}


A Mat is an OpenCV data structure akin to a matrix – it is like a ‘magic bucket’ that you can store almost any data in for OpenCV to process. 

Detecting faces

The OpenCV library in Java provides the detectMultiScale() function to detect face images. It takes a number of parameters, and you’ll have to play around with the parameters to get it to work reliably. Face detection is more of an art than a science!

The first thing we want to do is convert the colour image from the webcam into a grayscale image, and then equalise the histogram of the image. While OpenCV can process colour images it seems you’ll get more reliable results from a grayscale image.

Mat mRgba = new Mat();
Mat mGrey = new Mat();

frame.copyTo(mRgba);  // work on a copy of the captured frame
Imgproc.cvtColor(mRgba, mGrey, Imgproc.COLOR_BGR2GRAY);
Imgproc.equalizeHist(mGrey, mGrey);


Then we can actually do the face detection. Drum roll please….

private final String features = "/lbpcascade_frontalface.xml";

CascadeClassifier faceDetector =
    new CascadeClassifier(getClass().getResource(features).getPath());
MatOfRect faces = new MatOfRect();

faceDetector.detectMultiScale(mGrey, faces, 1.8, 2, 0,
    new Size(20, 20), new Size(160, 120));
I’m using the lbpcascade face detector for this example because it runs quickly on the EV3 CPU and gives reasonable (but not great) accuracy. You will need to copy the file lbpcascade_frontalface.xml from the opencv/data/lbpcascades directory into the same directory on the EV3 as the jar file you are running (usually into /home/lejos/programs).

That’s a lot of parameters. So what do they all mean?

  • mGrey: This is the input frame that we want to detect a face in. In this case it’s the grayscale image.
  • faces: A matrix of rectangles; OpenCV returns each face as a rectangle as the bounding box around where it thinks the face is located. 
  • scaleFactor: how much the image is scaled down between detection passes. The lower the number the more accurate the detection, but as a consequence the slower it runs. I started with a value of 2.0 for the scale factor, then reduced it to 1.8 to get better accuracy with a reasonable detection time.
  • minNeighbours: controls accuracy by requiring several overlapping candidate detections before a face is accepted. I just leave this at 2.
  • flags: leave this at 0.
  • minSize: the minimum size for a face. I set this to 20×20 to allow small faces to be detected. 
  • maxSize: the maximum size for a face. I set it to 160×120 in case the face takes up the whole image.
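One way to see why scaleFactor dominates detection time: the detector searches for faces at a pyramid of window sizes, stepping from minSize up towards maxSize by multiplying by scaleFactor each time. The sketch below is a rough illustration of that growth, not OpenCV's exact internal scan, and it uses 120 (the image height) as the upper cap:

```java
// Rough illustration of how scaleFactor controls the number of detection
// scales swept between minSize and maxSize. A simplification, not
// OpenCV's exact internal behaviour.
public class ScaleCount {

    static int numScales(double minSize, double maxSize, double scaleFactor) {
        int n = 0;
        for (double s = minSize; s <= maxSize; s *= scaleFactor) {
            n++; // one full sweep of the image at this window size
        }
        return n;
    }

    public static void main(String[] args) {
        // 20..120 pixel faces, matching the minSize/maxSize values above
        System.out.println(numScales(20, 120, 1.8)); // coarse: 4 scales, fast
        System.out.println(numScales(20, 120, 1.1)); // fine: 19 scales, much slower
    }
}
```

Dropping the scale factor from 1.8 to 1.1 nearly quintuples the number of sweeps, which is roughly why the "accurate" settings crawl on the EV3.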

Interpreting the output and drawing faces

Once you’ve run detectMultiScale() you’ll need to find the faces, if any were detected. But how? Recall the output is stored in the faces matrix, so we convert that into an array and see if it has any elements in it. If it does we iterate over each element, which is a rectangle, and then draw a rectangle on the image frame so we can display it in a web browser.

int numFaces = faces.toArray().length;
if (numFaces > 0) {
  System.out.println(String.format("+++ Detected %s faces", numFaces));
  // each rectangle in faces is a face
  Rect[] facesArray = faces.toArray();
  for (int i = 0; i < facesArray.length; i++) {
    Rect rect = facesArray[i];
    // centre of the face - handy if you want to steer the robot towards it
    Point center = new Point(rect.x + rect.width * 0.5, rect.y + rect.height * 0.5);
    // draw a green bounding box around the detected face
    Core.rectangle(frame, rect.tl(), rect.br(), new Scalar(0, 255, 0, 255), 2);
  }
}

What does the output look like?

I opened up a web browser and connected to my EV3 over wifi. The images look like this (yes, I usually look like that as I’m trying to watch the screen while holding my head still 🙂).

Bounding rectangle on face capture

As I mentioned face detection is very sensitive to lighting and background in an image. In this case I had to move the camera so the skylight above my head wasn’t in the image. At night I found that the lights in the room were creating highlights which confused the camera. An ideal lighting setup is to have a dull matt background with soft lights coming from behind the camera at your face.

Putting it all together

As always you can get the source code on my github account: I’ve added a quick way to play around with the minNeighbours, scaleFactor and classifier type parameters on the command line. You can run the code as:

jrun -cp HTTPFaceDetect.jar HTTPFaceDetect <minNeighbours> <scale> <l/h>

For example:

jrun -cp HTTPFaceDetect.jar HTTPFaceDetect 1 2 l

will give very fast but inaccurate detection, whereas:

jrun -cp HTTPFaceDetect.jar HTTPFaceDetect 1 1.1 h

will be very accurate using a Haar classifier, but will be very (very) slow on the EV3. Experiment and see what works for you.